Bixby Developers Chat

Healthcare, Browsers and Voice with Nick Myers - Episode 39

September 13, 2021 Nick Myers Episode 39

In this episode, Roger talks with Nick Myers, the CEO of RedFox AI. RedFox just introduced the V Lab voice assistant, which solves a very challenging issue in healthcare.

Roger and Nick discuss

  • Nick's professional background and how he started with Voice
  • The founding of RedFox.AI and their pivots
  • Nick's very personal story about V Lab and why it's so important to him
  • Voice and the web - why it makes sense but why it hasn't gained traction
  • Building an independent voice assistant
  • Synthetic voices vs voice actors
  • Nick's predictions for voice
  • And much more . . . 

Links from the Show
RedFox.AI -

Keeping in Touch
Nick's e-mail -
Nick on Twitter - @therednickm
Nick on LinkedIn -

Host: Roger Kibbe
(@rogerkibbe) is a senior developer evangelist for Samsung

All Bixby Developers Chat Episodes are available on your favorite podcast player and at:

More about Bixby
Samsung Bixby is a next-generation AI platform that enables developers to build rich voice and conversational AI experiences for the Bixby Marketplace and Bixby devices, including phones, watches, televisions, smart appliances, and more.
Bixby Developers Homepage -
Bixby Developers Github -
Bixby Developers YouTube Channel -
Bixby Developers Twitter -
Bixby Developers Facebook -
Bixby Developer News/Blogs -

Support the show

Nick Myers - Episode 39

Hello, Bixby Developer Chat listeners. In this episode, I'm talking to Nick Myers. Nick is the CEO of RedFox AI, and RedFox has just introduced their V Lab voice assistant. It's focused on the healthcare industry and solves a very vexing problem in that industry.

Nick discusses his history with voice, trying many things and eventually pivoting to V Lab development, and why this is a very personal journey for him. We talk about voice and medicine, voice-enabling the web, why it makes so much sense, and why voice has struggled to gain traction on the web.

We discuss independent voice assistants, synthetic voices versus voice actors, the future of voice, and much, much more. I think you will love this episode. Nick has a passion for what he does, and this clearly shines through. Without further ado, here's my discussion with Nick Myers.

Roger: Today, I have the honor of talking with Nick Myers. Nick is the founder and CEO of RedFox, a podcast host, and an overall thought leader in conversational AI and voice. And I'm super excited to talk to Nick today.

Nick, please introduce yourself.

Nick: Thank you for having me on, Roger. This is awesome. I know it's been quite a while since you and I have had the opportunity to catch up, and I think this is going to be a really good way to do that while diving into some exciting voice stuff. But yeah, thank you for the intro. Like you said, I am the founder and CEO of a startup based in Madison, Wisconsin called RedFox AI.

We essentially help biotech and life sciences organizations leverage conversational AI, with a specific focus on home health and at-home diagnostic testing, and we recently launched a brand new product that we built. I'm sure we'll talk about that. But of course, you mentioned I'm a podcast host as well, like yourself.

So, I've been hosting The Artificial Podcast since May of 2019 and love doing it. It gives me a great opportunity to chat with people from all over the world working in AI and voice. Along with that, prior to COVID, I did a fair amount of speaking, and I'm hopeful to get back into that again soon as things hopefully start to open up more and calm down. But yeah, that's a bit about me, and thank you again for having me on the Bixby Developers Chat.

Roger: Absolutely, thanks. So you're looking forward to conferences coming back?

Nick: Yeah, a hundred percent. I think I learned through the pandemic... somebody told me this analogy once, where you have a stack of cups, like a pyramid, in your life, and you're continuously trying to fill those with things that give you purpose and value.

And for me, up until the pandemic, I didn't realize what filled me up all the way to the top and ultimately let everything else spill over into other areas of my life. So when that spigot got turned off and public speaking went by the wayside, I actually really struggled for several months throughout the pandemic, because I couldn't get that release from the thing I was really passionate about.

Now, luckily, of course, I've been able to work through that and did some virtual conferences and some other things. And in the last several weeks I've done a couple of events. So even in our fourth or fifth wave of COVID, however you want to look at it, things seem to be coming back. But yeah, I really love speaking and I really can't wait to start doing that more and more again.

Roger: Yeah. It's amazing, when something is taken away from you, so to speak, how much you realize how important it is.

Nick: A hundred percent 

Roger: I love in-person events. I love speaking, but even more... yeah, I attend the sessions, but what I love is roaming the hallways and finding people, like, oh, I know you from Twitter.

And the kind of serendipitous conversations that happen. I super look forward to that. Yeah.

Nick: Same thing.

Roger: What I want to do, Nick, is bring you back: what was your very first experience with voice?

Nick: I love that question, because I'm sure, as you've experienced being in voice, everybody kind of has that one experience they can draw back to.

It's like, oh yeah, I did this with Alexa, or I did this with Bixby, or whatever it may be that pushed them into this industry. And for me, it really came down to ordering toilet paper. I got my first Amazon Echo in December of 2018, for Christmas. My mom had gotten it for me for the holidays: she saw it on TV, it looked really cool, and you like tech, so here you go.

And I'm like, all right, I've seen these things on TV and I know other people had them. I didn't have one, so I set it up. Fast forward to January of 2019, and it was very cold, like a negative-20-degree wind chill day here in Madison, that snow we were talking about earlier. I was in my bathroom, and I realized I was fully running low on toilet paper. I mean, it was an emergency scenario. I had one roll left, and I thought to myself, there is no way in hell I'm going outside in that. That's not happening, because it was so cold.

Why would I do that to myself? Then I pulled up my phone and I'm like, I could order it on my Amazon app. And then for a split second, the thought passed through my mind: I wonder if that thing on my kitchen counter can order it for me, that thing being my Amazon Echo and Alexa.

So I was like, Alexa, I need toilet paper. And I will still remember it to this day, Roger: in less than 30 seconds, I had Alexa recommending me a 12-pack of Cottonelle priced at $5.36, courtesy of a discount from Alexa, shipped to my doorstep in 24 hours. We were already going through some transitionary periods with the company at the time, deciding what we wanted to focus on next, because we were in VR and it wasn't working out. But that one experience made the light switch turn on in my brain.

And I thought, okay, there's something here. Of course, I was thinking a million and one different things, all very general at the time. But with how easy that interaction was, I thought, this is truly going to be the new interface, or the next interface, however you want to look at it, between humans and technology.

So that's really what started the journey for me. And even before I got on board, I just became super fascinated with how Alexa works. So I became really obsessed with AI, and I'm a tinkerer. I love taking things apart, learning about them, and putting them back together. So I just started diving down the AI rabbit hole and learned everything I could there.

And then that just transitioned seamlessly into voice tech. So that's my story and how I got started in all this.

Roger: V-commerce! I don't know if I've talked to anyone else whose first real v-commerce experience got them into this industry. It's the promise.

But that's a fabulous aha moment. And what a practical use, right?

Nick: Right.

Roger: So let's jump in and talk a little bit more about RedFox. What I'd love to hear is just a little bit of the history: what, when, and why did you found it? And then, as you and I have talked, it's kind of pivoted over the years to do different things. So maybe you can start at the beginning and talk about some of the evolution and what you guys have done.

Nick: Sure. So let's hop into our DeLorean here and go back in time, my corny little metaphors and all. Back at the beginning, we're in late 2016 and I have my first full-time job out of college.

I'm leading marketing for a national trucking company. I don't know, it was just one day I thought to myself, I want to start my own business. And that's nothing against where I was working at the time; I'm still actually very good friends with the owners today. They're actually clients of ours.

There was just this one moment where I said, I want to start a company. So I went to my best friend, who also told me he always wanted to start a business, and we put our heads together and decided, well, what are we both good at? I said, well, I'm really good at digital marketing; my background is marketing, communications, and PR. And Brett's background is, he's kind of a jack of all trades.

So we decided we wanted to start a digital marketing business. Everybody... I shouldn't say everybody, but a lot of people do digital marketing, right? You go to any city in the United States, at least a larger metropolitan area, and you have 50 different digital marketing agencies.

So we decided we wanted to take more of a tech approach to that, because I've always loved technology too. Deep down in me somewhere, as I was getting into marketing, I knew I was going to wind up in tech somehow; I just didn't know how. But we wanted to take a technology approach with RedFox Creative, as the company was called at the time.

So we started working with virtual reality. We got ahold of some market research and we're like, oh my God, VR is going to be this huge thing in 2017 and 2018. Then we started looking at 360 video and the evolution of how we produce content. We actually bought a 360 video camera that could shoot footage in 360 VR, taught ourselves how to film and edit with it, and we actually got a couple of clients out of it and were doing videos. But of course, several months into that, things in the VR space just were not taking off like we thought they would. And I think, flash forward to 2021, VR has definitely evolved and become bigger, but it's still not this explosion, right?

That's impacting everybody's lives. So ultimately Brett, who was partnered with me and was also the CTO of RedFox (I'll get to that in a bit), took a step back. So then I really had to decide: what do I want to do with RedFox Creative? Do I want to keep the company? Do I just want to focus on my full-time job?

And ultimately, after talking with family and friends, I decided to keep the company and just see where things went. So then flash forward to late 2018... or no, flash forward to 2017 into 2018, and that story I just told you about: I had that experience ordering toilet paper with my Amazon Echo.

And then from that moment forward, it was, all right, this is how I want to position the company. So 2018 was more or less a year of repositioning things, learning about AI, and building a very robust speaking career for myself, just because I felt like I was absorbing this information so quickly.

I had to teach other people about it, and I realized there was a huge gap in just general knowledge of AI and the impact of the technology. So 2018 was a building year, and then in late 2018, early 2019, I decided to rename the company to RedFox AI and focus only on voice technology. And initially, when we made that transition, the goal was, of course, to leverage tools like Alexa, Google Assistant, and Bixby to develop voice applications for enterprise.

Which, of course, as we look back even to 2018 and 2019, was still very new, right, especially to enterprise. And we're still working out a lot of the kinks with enterprise, but that's what we wanted to do. So ultimately Brett, who had stepped back, asked, how can I be a part of this? I know you have this really robust speaking career now, and there seems to be a lot of opportunity with voice tech.

So I told him, well, if you want to hop back in, you can learn how to program. And I kid you not, Roger, within several months, I couldn't believe how proficient he became at programming and learning how to build voice applications.

Self-taught. Brett and I are very open about this: we teach ourselves stuff. He taught himself how to program and how to build voice applications. So from there, it was just kind of odd projects here and there, building voice applications for different people we have in our network, small to medium-sized businesses for the most part.

And this was all while I was still working full time. So now we fast forward through 2019. My speaking career grew even more, and Brett became really proficient in voice applications. At the end of 2019, we had all these plans for what we wanted to do with the company in 2020. Well, it just so happened that in February of 2020, as I transitioned out of my old job to go focus on RedFox full-time, wham, the pandemic hits.

So now there was added uncertainty on top of how we were going to move the company forward. I figured we had two different, I won't call them options, but two different paths that we could go down. The first was we could either sit here and crumble and watch everything we had been working towards the last year just kind of fade away, or we could really sit down and hammer out and figure out the direction we wanted to head in.

And I'm happy to say, of course, that we chose the latter of those two paths. Ultimately, I really think the pandemic forced us to narrow down who we were trying to build voice applications for at the time and gave us a really specific focus and a really specific market to go after.

And of course, we also experimented with voice commerce, v-commerce, too. You may remember seeing my posts about this food delivery service called Blue Fox Box, for which we actually built a custom ordering system on Alexa. Unfortunately, that did not pan out: a lot of overhead costs, and a lot of other reasons why it didn't work out. But ultimately, we were still able to learn valuable skills that we then carried into where we are now.

And ultimately, Brett got some experience building an ordering system for Amazon Alexa, which we found to be really valuable too. But as we were moving through the pandemic and figuring this stuff out, I thought about an experience that I had as a kid. And I'm very, very open about this.

I'm a survivor of childhood leukemia. I had acute lymphoblastic leukemia as a kid. And I thought about a specific experience I had around 2003, 2004. I was undergoing chemotherapy treatments, both in the hospital and at home, and I had a very, very hard time taking oral medication. I just could not take pills.

I would psych myself out and vomit, and it just was not a good scenario for anybody. So ultimately, my doctors approached my parents and said, well, he needs to get the chemotherapy and he can't take pills, and the only other option is to give him injections at home. So they maybe got 15 minutes of training at the hospital on how to give me these.

And even now, fast forward however many years later into adulthood, I wonder why that training wasn't longer; it was only about 15 minutes, from what I can remember. But of course, the time came for them to actually give me these injections at home. And I remember sitting in our bathroom, and as they were about to give me these, a million and one questions started popping up. They just wanted some of these simple questions answered, and they couldn't remember what the instructions were from the hospital.

And, you know, when you are about to inject chemo into your son, you want to make sure you're doing it appropriately. So I remember them calling into the hospital and just going through this awful IVR system that kept redirecting them down a million and one different paths, not to where they needed to go.

And finally, after what felt like 30 to 45 minutes, they got in touch with my doctors. And of course, the internet wasn't really an option back then. We're talking 2003 and 2004, when you could only access it on a desktop that was super slow, and websites, especially hospital websites, were just bad.

So I really thought about that experience, and I went to Brett with it and I'm like, this has stuck with me all these years, and I really want to make sure nobody ever has to experience what I did, because I know it's still a problem. And the reason we knew it was still a problem is because of some of the investigating we were doing to try and narrow our focus:

getting in touch with some biotech companies and at-home medical testing companies, this problem is still there for people. So I really thought about that experience and said, we can use voice to solve this and make sure nobody ever has to deal with it again. And this was also during a time when, for years, really since getting into voice technology, I had been talking about wanting to build our own voice assistant, our own digital assistant. And that's nothing against Alexa, Google Assistant, Bixby, or any of the larger platforms.

It's just, I've always wanted to build our own, because doing so would allow us to really solve this problem the way that we want to and really do anything with it. So back in March, and I'm talking March of 2021, we started going through, well, how would we actually build a voice assistant?

Can we even do this? Do we have the skill set? Do we have the aptitude? Ultimately, we just started researching and reading. Brett kind of took the whole project by the reins and went for it. And about two and a half months later, we had a fully functioning digital assistant that was deployable and ready to be used on any website we chose to deploy it on, with all of the features and everything we built into this thing to solve these specific problems for home healthcare. And to those listening:

Like, are you sure it works, and it only took two and a half months? That seems like a really short period of time. And I will agree with anybody listening who may be thinking that, because I still can't believe we did it in two and a half months. But I kid you not, we did this in about a two-and-a-half to three-month timeframe to get the product to the current stage it's at, where we already have active customers in the pipeline that are very interested in working with us.

I should also back up and say, we built this system that we call V Lab, which is our digital assistant built from the ground up. We did build a version of it for Alexa and Google, and we had that prior to our own digital assistant. But again, we thought, let's take the same exact system and concept we built for Alexa and Google Assistant and build our own digital assistant with it.

So that's really the story of how we got from a digital marketing company doing VR to a pure conversational AI and voice tech company for biotech and life sciences.

Roger: Wow, great story. And I love it. When people ask, what should I build? Build from the heart. And boy, talk about building from the heart. Wow. You experienced this as a kid, and that had to be super difficult. As you were describing that, I could see your parents just freaking out, right? Your kid needs this, what do we do? We can't get through, there's some damn IVR, we're pressing every button. I was just vicariously experiencing that.

What an amazing story. And wow, this is still a problem. You're so, so right about medical care and voice. I want to talk about how doctors and patients use it, but first I want to back up a little bit and talk about the technology. You started to get into it: you built it for Alexa.

And then you said, no, we're going to build something custom. Dive a little bit deeper into what the limits were with Alexa and Google that prevented you from staying on those platforms.

Nick: Yeah, that's a great question, Roger. I think for us, it was because we really wanted to tailor this digital assistant to solve incredibly specific problems for patients who are completing at-home diagnostic tests, using at-home medical devices, or really receiving any type of home health care.

We could do some of that with the other two, but take our authentication feature that we built into this: the way we built it, we couldn't do that over Alexa or Google. And we wanted to make sure that, if we worked with a provider... because we anthropomorphize... anthropomorphic... you know what word I'm trying to say?

We anthropomorphize so many different things. Say, for example, a provider was offering a testing product that they already had a voice for; we wanted to make sure we could clone that and actually use the voice of that product for the digital assistant, because we tailor the digital assistant to the needs of the provider.

We could not do that with Alexa and Google. Well, we could have done it with Alexa; I contacted Amazon, and I'm under NDA, but I'll just say it wouldn't have worked out for us the way we needed it to. And there were a handful of different data tracking components to this, because there are so many different pieces of data that we want to collect, and that we need to collect, in order to analyze what we need to hand off to the provider. The way we had to build it just would not have worked appropriately over Alexa and Google from a data tracking perspective and a data analytics perspective.

So those are just a couple of things, which is why we felt we were being held back by the tech of Alexa. And then, looking at building our own custom solution to this, the floodgates pretty much just opened immediately after we built this thing. Also, when we look at trying to build what we want for Alexa and Google, somebody has to own one of those devices, right?

Granted, almost everybody has an Android phone or an iPhone, so you can get these apps on them. But I'm all about ultimate accessibility, and with V Lab, we built it so it can be deployed on any website. You don't have to go into a native app. You don't have to use a piece of hardware.

All you have to do is open the web browser on your phone, go to the website, and tap on it, and you're there. We wanted to make it as accessible as possible. And, we may talk about this, I just view the web as such green space for voice right now. Having the capability to use the web for our digital assistant, and vice versa, just made so much sense from an accessibility standpoint. So those are some of the reasons why we chose to build our own versus sticking strictly with Alexa and Google Assistant.

Roger: Totally makes sense. I've heard likewise from other people who went their own custom way: the platforms were too limiting.

And I think there's a rise of lots of independent voice assistants happening right now, and it's because the larger platforms are really limiting in what you can do. Anybody who's developed for them, or even thought, oh, I'd love to do this... no, no, no, can't do that... has banged their head against the wall enough times that it's like, okay, ready to go walk around the wall.

Nick: I mean, I will say in defense, of course, of all the larger platforms, the true benefit we saw was the wide market share and reach they have, right?

You look at the numbers that get published, and it's clear that most adults in the United States have Google Assistant, Alexa, or Bixby. So that market reach is appealing. But again, for specific problem solving, using technology as a tool to create a positive human outcome, we needed to build our own.

Roger: Being web-based... and I definitely want to talk to you about the web and voice, because I'm very bullish on that. A lot of people have those devices, but not everybody has them, whereas the web pretty much is universal. Not completely, but that's the baseline.

So if it works on the web, then anyone in any situation who can access the web can go and use it. And I think that's fabulous and empowering: not saying, oh, you have this medical issue, now you have to go buy this gadget to help you with it. No, launch it through a web browser.

So tell me more about how you add it to your website. If I'm a company, give me an example of a provider who might be interested and what the process would be to add V Lab to their site.

Nick: Yeah. So the way we built it is... and again, Brett knows way more of the techie side than I do, so I'll do my best to sum it up from how he's explained it and shown it to me.

Roger: The process too: what do they do? Not just the tech. The tech is great, but also, okay, I provide this medical device and I want to add instructions to V Lab. What's the process?

Nick: Yeah. So essentially, say a provider comes to us and says, hey, we have this at-home diagnostic test to measure cholesterol, and we've noticed, well, let's look at the problems, some of which I've already raised here: patients are having a hard time completing the test because the instructions are confusing, or they can't get in touch with anybody because the call center is backed up. Or they complete the test but complete it wrong and turn it back in, so it's moot anyhow. Or they complete the test but don't turn it back in. All of these problems, we believe, we have solved with V Lab, the V Lab digital assistant.

So essentially, what we do is a setup and configuration, because every test is different. We have standard dialogues and all the different features that we built in, and we operate on a SaaS tiered pricing model. So it's just kind of what you want, and we'll set up and configure it based on these different tiers that we have.

We put it on the website. We essentially embed it; you can do either one page on the website or all pages. And of course, we designed it in a way that's eye-catching, so somebody notices that it's there. And the way we have it, so this was really funny, how we came up with some of the design: you know on a Mac, when you minimize a window, it does that really fluid motion?

I told Brett, I go, I want our voice assistant to function that way. We went back and forth for quite a while about, do we want to have a visual chat box? Do we want people to have the ability to scroll through what was said, and different things? And we ultimately decided yes. But again, we don't want people to assume this is a chatbot, because it's not. The distinction I make between a chatbot and a digital assistant is that a chatbot is just text entry.

Whereas a digital assistant is truly conversational and learns from the conversation; it gets better and speaks to you and interacts with you. That is what I view as the key difference. So I was very hesitant to add a chat box window, because I felt like it was getting too chatbot-like. I told Brett, if we design it this way, where somebody taps on it and it does that fluid open motion, and we just have the back-and-forth readout, then I'm okay with it.

So he did that, and we can embed it on any webpage throughout the entire website; of course, we can place it anywhere. It also resizes itself to fit the window, so it's responsive, like most websites are. So it's just a matter of: the patient receives their cholesterol test, and either the box tells them we have the digital assistant available, or they naturally go to the website anyhow to look for instructions or to get FAQ answers, which we're estimating happens 75% of the time.
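To picture the kind of embed Nick is describing, here's a minimal sketch in the style of common chat and analytics widgets. The script URL, data attributes, and function name here are all invented for illustration; this is not RedFox's actual integration.

```typescript
// Hypothetical sketch of a copy-paste website embed for a voice
// assistant widget. The script URL and data attributes are invented
// for illustration only.
function buildEmbedSnippet(
  assistantId: string,
  scope: "all-pages" | "single-page"
): string {
  // A provider would paste the returned tag into their site template
  // (all pages) or into one product page (single page).
  return (
    `<script src="https://assistant.example.com/vlab.js" ` +
    `data-assistant-id="${assistantId}" data-scope="${scope}" async></script>`
  );
}

// Example: embed a cholesterol-test assistant across the whole site.
console.log(buildEmbedSnippet("cholesterol-home-test", "all-pages"));
```

The appeal of this pattern is exactly what Nick mentions: one tag, no native app, no extra hardware, and the widget itself handles responsive sizing.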

So they go there, they just tap on it, and boom, they're talking with the digital assistant. The digital assistant is helping the patient and improving the patient's outcome from using the medical test. It's also collecting a handful of different data points and ensuring that the provider gets what they need in terms of the test being completed accurately and efficiently: supporting the patient, reducing call center queues, which reduces operational costs for them, and a handful of other things.

We're targeting, I guess, more B2B, but we're serving both patients and providers. Of course, in the market we're in, you can't have a provider without a patient, right? So we're helping to support the patient, but the provider is also our target and is who actually gets the benefit of that.

Makes sense?

Roger: Totally. Wow, this sounds awesome. I'm just thinking of the traditional experience: you take home some test, you get it out of the box, I need to do something with this, and there's a folded piece of paper in eight-point font telling you what to do.

And you're trying to read it or remember what to do, which means I have to hold it and read it, then put it down and remember what to do, and I can't remember all the steps. It's just crazy. I've done those kinds of things, and I've read the instructions and actually written the steps down.

My dummy's guide to it, so I don't screw it up, because I can't read and do it at the same time. So this is brilliant. I love this idea. I would go to the website, I would click, and it would be like, okay, and then it would walk me through it. And because it's voice, which is brilliant,

I could actually listen to it as I was taking the test.

Nick: Absolutely. And we've designed it so it really is meant to be like somebody is there with you, supporting you and walking you through the whole thing. On the patient end, it actually stops and checks in with them.

You know, do you need this repeated? Where are you at in this whole process? And from the moment the patient authenticates and starts taking the test using the medical device, that data is tracked over time. So the provider knows exactly when patients start and where they're stopping throughout that journey, and they can use that data for future optimizations.
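To make the per-step progress tracking Nick describes concrete, here's a minimal sketch. The type and method names are hypothetical, not RedFox's actual data model; it just illustrates recording where a patient is in a guided test session.

```typescript
// Minimal sketch of tracking where a patient is in a guided at-home
// test session. Names are hypothetical, not RedFox's actual API.
type StepEvent = { step: string; timestampMs: number };

class TestSession {
  private events: StepEvent[] = [];

  // Record that the patient reached a named step in the instructions.
  recordStep(step: string, timestampMs: number): void {
    this.events.push({ step, timestampMs });
  }

  // The last step the patient reached, or null if none recorded.
  lastStep(): string | null {
    const n = this.events.length;
    return n > 0 ? this.events[n - 1].step : null;
  }

  // Steps completed so far, in order; a provider could aggregate this
  // across patients to see where people commonly stop.
  completedSteps(): string[] {
    return this.events.map((e) => e.step);
  }
}

// Example: a patient authenticates, collects a sample, then stops.
const demo = new TestSession();
demo.recordStep("authenticated", 0);
demo.recordStep("sample-collected", 120000);
console.log(demo.lastStep()); // "sample-collected"
```

Aggregating `completedSteps()` across many sessions is what would let a provider see the drop-off points Nick mentions and optimize future versions of a test.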

So yeah, it really is about, again, at the end of the day, solving that problem of frustration and just unnecessary struggle that people have to go through getting information when they need it for at home testing, home healthcare, medical devices, one of the provider, and they're able to save a lot of money through automation and they make.

When the test is returned, that's their ultimate goal. So it's improving that outcome for them, helping them reduce operational overhead, really streamlining their whole process, and getting useful pieces of data that they have a very hard time collecting right now, so they can make future versions of the test better down the road.

So hopefully it doesn't sound like we're trying to pack too much into this package, but we're trying to solve so many different things while still focusing on the core of the problem: making sure nobody ever has to go through the struggle of a really bad IVR system,

or of just getting an answer to a simple question or a simple instruction for a test.

Roger: It strikes me that it's a big problem, but the solution you're talking about is actually relatively simple, right? You're talking broad scope: a huge problem, a massive problem. But what you're doing is saying, instead of that folded piece of paper, I have this conversational AI interface in front of it

that walks me through what to do, answers questions, and does it in a way that's just, it's like the perfect use case for voice. And I love that.

Nick: Thank you. I appreciate that.

Roger: Yeah, it's interesting. When you talk to anyone in the medical profession, and I've talked to doctors, even my own doctor, I said, oh, I'm doing things with voice.

He's like, oh, that's really interesting. And what was a half-hour appointment became an hour, because we talked about voice for half an hour. What he was explaining to me is: patients come in, and then I need to have them go home and take a test at home, or take a medication.

And adherence is really hard. People get confused. They forget. They don't know how to do it. So I tell them these necessary things need to happen, but if people don't do them, then what happens? They're back in my office with worse problems, or they need questions answered. So it really is a benefit to the doctor and the patient. I love this thing.

It's the win-win-win across the board. First, it's a win for the patient, because the patient can do self-care at home. It's a win for the doctor, because the doctor knows, wow, I treated this patient at home, and there's a lot of economics going on with doctors right now pushing them toward preventative care and at-home treatment.

And it's a win for the provider of the tests, because boom, you take the test and you do it right. They don't have to deal with somebody screwing something up, sending out another test, and all that hassle.

Nick: Well, we view this as being very timely, too. Even had COVID not happened, the home healthcare market was already expected to grow, but COVID just accelerated that by 100x.

I mean, the recent figure I saw, the global home healthcare market is expected to be worth about $512 billion by 2027. That's not a small amount of money, and it shows that there is going to be, and already is, a very strong demand for home healthcare. Especially when we look at the preventative piece of that, like you just mentioned, because I think with COVID, too, people recognize that.

I think people are just generally more cognizant of their overall health and the importance of finding something early, so they don't have to deal with very costly and horrible ramifications in the future. So I also find that what we built here is very timely for the home healthcare industry in general,

which is exciting as well.

Roger: Yeah. There's lots of names for it, telemedicine and so on, and a lot of that is just, well, Zoom. We're recording this on Zoom, and that's great. I did a doctor's appointment over Zoom, and I thought it was interesting that they can do that now. But this is the next step: to digitize, to use digital capabilities to help you

do that preventative care testing that you need to do. And the ceiling is very high for this, right? Because healthcare, everybody talks about it: ripe for disruption, we should modernize it. And I love things like what you're building, because like I said, this is a giant, huge, enormous problem in an industry.

And yet what I love is that what you built is not super complicated, and I mean that in the best way. It's enough to give you those instructions in a way that gets you all the way through the thing. I love relatively simple answers to complicated questions.

Nick: You know, this I've always believed.

And I maintain this to this day: I've always viewed technology as a tool, and it is up to us as humans to choose how we utilize it. In my case, I have always viewed technology as a tool that, if used properly, can generate a positive human outcome for someone. And I think the best tech practitioners throughout history have viewed it that way.

I mean, let's be real: I don't think the iPhone would be what it is today had Steve Jobs not viewed technology in a very similar way. Now, I'm not trying to compare myself to Steve Jobs. I'm just talking about the perspective of technology as the tool that makes life better for people. It is not just the solution;

it is a tool that gets you to the solution, that makes something better for somebody. And that's how I've always viewed technology. So that's the perspective I took when, again, we took that step back during the pandemic and decided: how can we use voice and conversational AI as a tool to create this positive human outcome and solve this problem that is affecting millions of people?

Roger: Oh, it's fabulous. And you're absolutely right: technology should empower us to live better, richer lives. I think sometimes that question just needs to get reversed. Steve Jobs realized that. I mean, he was brilliant at figuring out and saying, okay, I can build some tech because people need this,

even though they don't know they need it yet. That's a rare person who can figure that out. That was his genius: seeing what others couldn't see, but what would work so well.

I want to pivot a little. We started talking about voice in the browser, and I will say I'm super bullish about it.

And yet I don't see it a lot, and I'm really surprised. I'm curious about your thoughts on voice in the browser and the opportunities there, and maybe a little bit of why it isn't taking off.

Nick: Yeah, Roger, I've been looking forward to this part of our discussion

since we started chatting here. Truth be told, I think up until maybe late 2019, early 2020, I wasn't even considering the web as a viable option for voice technology. And I think that's because, for someone like me who entered as Alexa was booming, Google was booming, Bixby was booming,

and all of these major voice assistant platforms that are based on hardware, you just get really excited, you get sucked into that. And of course, when you start designing and developing for those platforms, you really get sucked into it. It can be challenging to think about something like the web being a really good place for voice.

And then, as we started reaching some of those limitations with the major platforms, I really started thinking: why is nobody using the web? That seems like such an obvious spot for voice technology, because everybody uses it. It can be accessed on any smartphone, desktop, or laptop.

We're so used to scrolling, tapping, swiping, and clicking as it is. I feel like we were almost trying to jump too quickly: all right, here's the web and mobile that everybody has been exposed to for the past 10 years and is used to; now let's jump right over to a completely voice-based interface.

That's when I really got thinking about the web application of it. And then, thinking through how we would build our own custom digital assistant, it occurred to me that the only real way to do that was doing it on the web, with the tools that are available now.

So that's really when I started thinking about it. And as I really started thinking about it myself, I started noticing some of the other companies and providers in our industry who are leveraging the web. Again, to the point you just made a bit earlier, it's not a lot.

And I'm still kind of surprised by that, but I have noticed it seems to be catching on a bit more, and I've noticed it in discussions happening both on and off social media. But that's really when I started thinking about the web: late 2019, early 2020, as we really started thinking about building our own digital assistant.

As to the question you asked about why more people aren't doing it, maybe that goes back to the point I just made: everybody's been so used to the ecosystem of Alexa and Google and Bixby. And don't get me wrong, they offer great developer tools.

It's not overly complicated. You can jump right in and build a voice application. You don't need to build an NLU, or even set up an NLU from a different provider and piece it all together. It's all right there for you. And I think everybody's comfortable with that and used to it.

And of course, as we know, and this is not a bad thing, it's just natural: humans are averse to change. As soon as they get used to something, it's really hard to change the thinking to: all right, how can I take the skill set that I have with these major platforms and these larger tools and turn that into something for the web?

You know what I mean? So that's why I think maybe there's been some hesitancy. And I also think that, up to a certain point, the industry as a whole has taken a very 50,000-foot view of voice. We've been very general with it. When you start thinking about and narrowing down: all right, what is the specific problem this technology can be used for?

What is the specific outcome the technology can be used for to make something better for somebody? You really start narrowing it down from that 50,000-foot view to maybe a one-foot view. I think that's also very challenging. It took us a while to get to this specific problem.

I'll say one of the hardest things about beginning a startup is really narrowing down your market, really narrowing down the problem you're solving. And I think with voice that's even more challenging, because it really can apply to everyone. But I will say, when you do narrow it down and you see that ray of light, that is: here's the problem, go solve it with voice technology.

It's a game changer. So I may have gone on a couple of tangents there, but hopefully that all made sense.

Roger: No, it's interesting. I think sometimes we say voice and people think, okay, it's the Echo Dot. No, no, no. That's a piece of hardware providing a voice. We don't think of voice as an interface, a modality, and where it could best be used.

So you think of how we touch, type, and swipe on the web, and that's great, it's really good for that. But you add voice into the mix and ask, okay, where is it useful? Your product is a great example: hands-free, doing something medical right now.

When I can't use my hands, that's absolutely an awesome use case. I also think about things like searching, or ordering something complicated. I can just say the whole complicated sentence. It's a lot quicker than tapping eight buttons and trying to figure it out on the web.

So bring that modality in and use it for what it's good for, and it augments and enhances the experience. But I do think, and I think you're spot on, that we've gotten a device-centric mentality. Okay, it's an Echo Dot, I talk to it. Okay, it's my phone, I swipe and tap.

There's a fair number of people that talk to their phones, too. I don't talk to my laptop, and I wish I could. I try to think about it: is it faster, quicker, and easier to say this, or to type and tap? Sometimes it's better to type, but oftentimes it would be faster to speak.

My old boss Adam Cheyer had a general rule of thumb: if it's on screen, the UI is brilliant; if it's off screen, think about whether voice is the best way to get to it. And I was like, that's a great way to think of it. Voice is a great modality for getting to the stuff that you don't see but that you want.

But yeah, I think there's a bright future on the web. We've just got to break out of our device-centric thinking, because frankly, for voice to prosper, it needs to be a modality, a way I interface with all my devices.
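For developers curious about the in-browser voice that Roger and Nick are describing, the standard entry point today is the browser's Web Speech API. Here is a minimal sketch, not tied to any product discussed in this episode; the helper names are illustrative, and the browser-only objects are feature-detected so the code degrades gracefully elsewhere:

```javascript
// Minimal Web Speech API sketch: speak a prompt, then listen for a reply.
// speechSynthesis and SpeechRecognition exist only in browsers, so both
// functions feature-detect and return false when the API is unavailable.

// Pure helper so the speech settings are easy to inspect and adjust.
function makeUtteranceConfig(text) {
  return { text, lang: "en-US", rate: 1.0, pitch: 1.0 };
}

// Text-to-speech: have the page read a prompt aloud.
function speak(text) {
  if (typeof window === "undefined" || !window.speechSynthesis) return false;
  const cfg = makeUtteranceConfig(text);
  const utterance = new SpeechSynthesisUtterance(cfg.text);
  utterance.lang = cfg.lang;
  utterance.rate = cfg.rate;
  utterance.pitch = cfg.pitch;
  window.speechSynthesis.speak(utterance);
  return true;
}

// Speech-to-text: listen once and hand the transcript to a callback.
function listen(onTranscript) {
  const Recognition =
    typeof window !== "undefined" &&
    (window.SpeechRecognition || window.webkitSpeechRecognition);
  if (!Recognition) return false;
  const recognizer = new Recognition();
  recognizer.lang = "en-US";
  recognizer.onresult = (event) =>
    onTranscript(event.results[0][0].transcript);
  recognizer.start();
  return true;
}

// Example wiring: speak a question, then listen for the user's answer.
// speak("Did you wash your hands before opening the test kit?");
// listen((answer) => console.log("heard:", answer));
```

Recognition support still varies by browser (Chrome ships it prefixed as `webkitSpeechRecognition`), which is arguably part of why voice on the web has lagged behind the dedicated assistant platforms.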

Nick: I think right now it has to fit into something that people are relatively familiar with, too. Because, Roger, one of the ways, well, one of many ways, that I think about this is this:

you use Alexa, for example. You have Alexa, you open it up, you set it up. So what are you going to start doing? Weather, setting clocks, timers, all that general stuff it can do. But then you go to the skill store, and you have, what is it, a hundred thousand different things you can do with this thing?

And you don't even know where to begin. You have to consciously think about, all right, this is what I want to do with it, and you've got to go through these skills. By the way, I'm not knocking the skills; I'm just trying to put how I view this in perspective. So you've got to go through all these skills, you have all these different skills for things, and our brains are already taking in more information than I think they were designed for.

This just adds something else into that. So it creates another hurdle to using the technology for a specific purpose. Whereas you look at something like, let's use V Lab for an example: V Lab is built for one purpose. When you're going to that provider's website, the moment you tap on it, it is there specifically to support you taking that at-home diagnostic test or using that medical device. It is built for that purpose and that alone.

So that is how I view this as well: as much as we say humans love choice, I actually tend to think the opposite. I think less choice, in some instances, is better, and that's what we are trying to go for. That's been my thought process for a while as we built this for a specific purpose. Nothing against the skill store and actions and capsules; it's just very general.

And I think it can get very confusing and complicated for people, versus an independent use of voice that is built for a specific purpose.

Roger: It strikes me as you're talking that the beauty of voice is that the machine needs to understand us. We don't need to understand the machine, unlike any other way of interfacing with it.

But when I have to start going to a skill store and adding skills, then I have to think a lot, and all that beauty of being able to just naturally ask a device for something, in the most natural way to communicate, goes away.

But in your case, it's purpose-driven, it's purpose-built, it does the thing, it makes sense, and it's done in the best way possible for the purpose. It's a great use case. You're reminding me, I'm going to share something that blew me away where they used voice.

I had one of those locks with a little keypad you use to enter, and it broke. So I got a new one, a Yale. And as soon as I put the battery in, it started this voice thing. It wasn't an interface you could talk to; it was just a recording.

But the cool thing was, in order to set those things up, you normally have to press like 87 different buttons. Instead it walked me through it: okay, choose your code and then press the pound key. And then I pressed the pound key and it said, okay, would you like to add another code? Press one or two.

So it was using voice as the output, but the keypad as the input. I thought it was a really interesting mix, and it was so much better than some tiny little piece of paper with all these crazy diagrams. God bless whoever writes those things, but they are engineers, not English majors, because they're hard to read.

And I was like, wow, that's really cool. A simple, purpose-built thing. Not even something you could talk to: a cheap thing they embedded in this device, a little speaker that probably costs a few dollars

and some chip. I'll bet they saved monstrous amounts of dollars on customer service calls.

Nick: Oh, I'm sure. Absolutely. I like that use case, too. Again, I feel it draws back to that purpose-built application. And I would not be shocked if we really start seeing more of that in the voice tech industry.

I think it's coming, and I think it's starting to happen, because I think we've seen what being too general looks like. It's what I equated to when we said we wanted to build Alexa skills and Google Actions for everybody: when you're trying to help everybody, you can't help anyone.

So I think that trend of really looking at voice for a specific purpose, a specific application that fits naturally into something people are familiar with, such as the web or a different modality, is really starting to make some waves. And I hope it does, because I really view that as the future of this industry.

And you've been involved in the Open Voice Network. The main thing they're working on is: how do we get this future world of millions of different voice systems to work together, with standards? That's where that organization's thinking is, and I couldn't agree more.

So, I'm really optimistic about what's coming.

Roger: Yeah. We need to open our minds to where the voice interface lives, and it's not always the hockey puck on my desk. The hockey puck on my desk is awesome, but there's a bunch of other places where voice is useful, and I love hearing those use cases.

Hey, so Nick, one thing that you and I have talked about a little bit is synthetic voices versus voice actors. This is kind of a hot industry topic, because as the synthetic voices get better and better, voice actors feel like, oh my God,

it's encroaching on my territory. And we've seen some cases where they take a voice actor's voice and make a synthetic voice out of it, whether or not consent is even a thing. I think we were talking about TikTok a little earlier; TikTok had a problem where they took someone's voice without permission.

Anyway, hot industry debate. I'd love to find out where you are and what your thinking is around synthetic voices versus voice actors.

Nick: Yeah, this is definitely a very hot topic right now, and I am very upfront, I should say very forthcoming, with my opinion on it. The reason being, having worked with both prerecorded and synthetic voices, and now in the capacity that we can work with synthetic voice with V Lab: the future of voice technology is going to be in synthetic voice.

I mean, I am so confident in that. I won't fight to the death over it, of course, but I will make my opinion known. And the reason I believe that is the dynamic nature of it. Say, for example, you build a voice application. Whether it's simple or not doesn't really matter, but you write

dialogue scripts and have a person prerecord them. It's very static. You can't really change much of it with the snap of your fingers. If you needed to add some additional dialogue or any additional component, you'd need to go back to the voice actor for rerecording, and that just costs more money and more time.

And then one thing I was thinking about: what if you invest in a prerecorded voice for your voice assistant, one that's meant to be the voice of your brand or your company, and the person dies? What are you supposed to do? That person's gone. So I think about a lot of that when we look at it from the prerecorded angle. Versus synthetic, where I think we can all agree it was pretty bad a couple of years ago.

I mean, even seeing how it's evolved in a few years: I think Alexa's really good now, but if I think back to Siri in 2011, let's be real, synthetic voice was very robotic. It was not something that you'd shout from the mountaintops should be in everything.

But as we've gotten into the late 2010s and now the early 2020s, there's been so much progress made with synthetic voice. Specifically, you look at neural text-to-speech voices that use a complex set of deep learning algorithms to mimic human speech,

and they sound right, like another human would. And then you look at the different voice cloning technologies out there that accomplish incredible things. In a talk I used to give, I would always show the Adobe VoCo technology, which they showed off at their annual conference in 2016, live on stage.

They had recorded Jordan Peele's voice, and the AI was able to mimic it seamlessly and cut and paste it like Photoshop, making Jordan Peele's voice say things he'd never said before. I thought that was incredible. And then Microsoft, at one of their conferences in 2019, had a woman presenting who was using HoloLens.

She was like, imagine you needed to be in Japan for a meeting, but you couldn't fly there. Through their neural text-to-speech technology, it was able to listen to her voice in real time and convert it to Japanese, and it sounded just like her own voice speaking Japanese.

I just think it's so incredible that we can do that. And truthfully, I think that's going to be the expectation as we move forward, because especially when we look at very problem-focused voice applications, you're going to need them to be dynamic, and a prerecorded voice is not dynamic,

whereas a synthetic voice is a hundred percent dynamic. Brett tells me all the time he can go mess with one of our synthetic voices at the snap of his fingers and boom, we're done. It's that simple. Or, like I mentioned earlier in our discussion, we have the ability, using Microsoft's custom voice font technology, to record and train our own voices, which I think creates a ton of value for brands.

No matter what brand you are, no matter how big your company is, it's an incredible amount of value to be able to train a custom AI model on a unique voice that can now be dynamically edited in real time, for the rest of time. So that's why I'm very bullish on synthetic voice. Nothing against folks who choose to use prerecorded voices;

it's a personal choice, and everybody does things their own way. But I am very, very, very bullish on synthetic voice, and I think the future of voice technology in our industry is in synthetic voice and voice cloning. In fact, Brett and I just did an episode of the Artificial Podcast on this a couple of weeks ago.

We were like, how cool a business idea would it be if somebody started an online marketplace for synthetic voices, where, say, you're aligned with a bunch of celebrities and you've created voice clones of their voices? They get a royalty when a clone is used, and anybody can go and license a voice from the marketplace to use in their voice application.


Roger: How cool would that be, to be able to talk to

Nick: someone like Arnold Schwarzenegger in real time? Like, how cool would that be? So that's where my mind is, and that's why I'm incredibly focused and bullish on synthetic voice.

Roger: I agree. It's come so far, and you're right: not only are the pure synthetic voices much better,

but that voice cloning stuff is really pretty amazing. And what a great idea: get a bunch of celebrities. You know Alexa now can be Shaquille O'Neal. I wonder, if I said, hey Shaq, would Alexa wake up?

Nick: actually don't know. And I did it, no shack, but I have Samuel Jackson. 

Roger: Oh, I have both of them. It just entertains me, right? Samuel Jackson is great too, because they added a little personality in there, which is very good.

Nick: And he can be explicit.

I actually paid 99 cents for the explicit version. I made sure to do that the moment it came out.

Roger: Absolutely, that voice cloning stuff is really amazing, what it can do. But I want to take the counterpoint, because this is what I've heard,

and I think there's a point here: those voices sound really good, but you feed them some content and they don't understand it. If I'm reading some content to you, I understand it, and I may be much more emotive around it: raise my voice, lower my voice, stop, pause.

Those kinds of things make the emotional aspect of the written word come through in my voice. I have not seen anything to date that can replicate that, though maybe people are working on it, because AI could probably analyze the content. To my mind, what you still get with a voice actor is the emotive understanding of the content

and then the appropriate kind of, I'll call it human, delivery.

Nick: For lack of a better word, yeah. That's actually a really good point you just raised, and something I'd never even thought about, again, having rabbit-holed myself into the synthetic voice side of things. But no, you're right, because you have synthetic voices reading back that content.

I mean, you can only get so detailed with adding SSML tags and some of those other things that adjust the pitch. You still have to use your own humanness, your understanding of context and the world around you, in order to do that, even with SSML. With a prerecorded voice, that's just a natural human thing:

we look at a piece of content and we understand the context, the time, the place, and all those things that the machine just can't grasp yet. So no, I think that's a really good point. I think you're spot on with that.

Roger: Yeah, it'll be interesting.

We have AI that can start to understand content, so there's no reason that can't be brought in. But to my mind, aside from the voices just getting better and better, and voice cloning getting almost crazily, almost too good, with the deepfakes and things, that's one of the next big steps: starting to actually look at the text, extract the meaning, and then dynamically change the pauses and breaths and everything we do as humans.

I think that's going to keep voice actors employed for a while longer, actually, particularly for long-form content. Short things work really well; long form still works, but I would not want to listen to a TTS voice read a book aloud.

Nick: No, I can't imagine sitting there listening to Harry Potter and the Sorcerer's Stone read back to me by a TTS voice.
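The pitch-adjusting tags Nick mentions a moment earlier are SSML (Speech Synthesis Markup Language), the W3C markup that most TTS engines accept. A small illustrative fragment (the wording is invented, not from V Lab) shows the kind of manual prosody control a designer can add:

```xml
<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
  <p>
    Your test kit is ready.
    <break time="500ms"/>
    <prosody rate="90%" pitch="+5%">Please wash your hands before opening the pouch.</prosody>
    <emphasis level="strong">Do not</emphasis> touch the tip of the swab.
  </p>
</speak>
```

Each pause, emphasis, and pitch shift here had to be chosen by a person who read and understood the content, which is exactly the judgment Roger argues the synthetic voice does not yet supply on its own.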

Roger: When synthetic voice gets good enough to do that, then voice actors will need to start worrying. So, jumping to a really big question, and you touched on this a little bit: voice and AI and your excitement. You're dedicating your career to it. What is it that gets you so excited about voice and conversational AI, such that you started a company, pivoted it to this, and continue to innovate in really amazing ways?

Nick: That's a great question, Roger. I mentioned earlier that what we're doing now is predicated on a very personal experience I had as a kid, and on knowing that the technology serves as a really good tool that can be used to solve that problem.

But I think it really stems from that first interaction with Alexa, where it's just so much easier to get things done by verbally communicating. I mean, you and I both, I'm sure, have had this experience countless times when texting somebody or writing an email:

so many things get lost in written communication because you don't have the emotional part of it, and you can speak so much quicker. With our technology, since things really got going in the sixties and seventies, it's been very much typing, then clicking, then swiping. I view these as very analog ways of communicating with technology, and the primary way humans communicate with one another is speech.

So it only seems natural to me that the next evolution of how we interact with our technology, which is becoming more and more human-like, is to be able to talk to it the way you and I are talking right now, or the way Tony Stark talks with Jarvis in the Marvel movies. I really do think we'll have something like that in the future, a hundred percent.

So that's what keeps me going: realizing that this is a truly unique and humanistic way to interact with technology, in ways we've never been able to before. And applied to specific problems, it's going to lead to so many discoveries and so many different ways of doing things that we can't even predict now. I truthfully feel like, when we get to the first manned mission to Mars, hopefully they'll have an onboard computer they can talk with.

I mean, wouldn't that just be the icing on the cake, right? When you look at all the sci-fi movies and how they get to different planets and things. So yeah, I just view this as the next interface with technology, at least up until we can implant things in our brains, which Elon Musk is already working on, of course.

But that's what keeps me going: knowing that it can be used to solve really specific problems better than any way before, and that the opportunities are there to really advance our interactions with technology and improve people's lives.

Roger: Kudos. I so agree with everything you were saying.

It's just such an exciting time, and we're so early.

Nick: Yeah. And that's the thing: to those of us who've been in it even just for a few years, it feels like, how could it be so early? But then you step out of the bubble and look at the world around you and where everything is,

and people are like, oh, there are skills for Alexa? And it brings you back down to earth. You're like, yeah, this is still very early. It still is.

Roger: And so there's just this greenfield of options in front of us. You articulated better than I could

why I'm in this industry; it's exactly that same reason. It's the opportunity and the excitement, and being more human in the way we communicate with our technology. So, amen.

Nick: Sometimes I hear concern from people: why are you trying to build voice technology when the big tech companies can probably just do it themselves?

And I'm like, well, what's interesting about any business is that the larger it gets, the harder it actually is to get things done. So if there was ever a time to hop into the voice tech space and start building something, I think now is the best time. Because as a younger, smaller company, you can be really nimble and do things that the bigger companies can't even begin to start working on, because of all the corporate red tape and all the different hurdles that come along with being a massive company, you know, 

Roger: Oh, innovation comes from small companies, and we've seen that over and over again, right?

At some point a company gets big enough that it's really hard for it to innovate. Hey Nick, one question I love to ask, and I call this my hidden gem question: what's a good voice experience? It could be a skill, an action, something independent like what you're working on, it could be a podcast that you love, but something voice related that you really love and enjoy, but you don't think enough people know about. 

Nick: That's a good question. I'll be honest, my use of Alexa skills and Google actions has dropped substantially. But one thing I will say I have been using more, and this is probably just because I recently purchased a duplex, is the Home Depot app. I'm on it a lot, and they have voice functionality within their app that's incredibly well done. Yeah, 

Roger: They do? Wow, I've got to check this out, because at Home Depot you've got to type to find things in the store. Does it 

Nick: First of all, it's right by the search bar. And you can basically navigate the whole app and find whatever you're looking for. It gives you the aisle number, everything.

Yeah. It's pretty 

Roger: neat. Oh, wow. We did some home remodeling and I ended up at Home Depot a fair amount, so I'm going to have to check that out. Very cool. I love this, because this is one I'm going to go try. 

Nick: That's really cool. 

Roger: Perfect. You're in the store, you need to find something. Do you want to tap in the, you know, blah, blah, blah, lots of taps? Or just ask, and it tells you it's at bin 37 in aisle 23. And you're like, okay, now I'm in aisle one, so it'll be about a half-mile walk to get to aisle 23, but that's Home Depot for you, 

Nick: right?

That's the thing, I've used it both in store and just navigating the app to buy stuff online. It's a much cleaner, easier experience, especially, to your point about Home Depot, where there's just so much, and voice helps you navigate that. So yeah, to anybody listening, definitely check out the Home Depot app.

If you go to Home Depot, try out their voice assistant. Awesome. Awesome. 

Roger: All right, crystal ball time. I'll give you the easy one and then the harder one. The easier one: what's your prediction for the next year or two in voice?

Nick: I think over the next year or two we are going to see a lot more people jumping on the web-based, independent voice assistant bandwagon.

I really do, based on a lot of what we talked about in this episode. As more use cases pop up of web-based voice assistants being used to solve specific problems, I think people are really going to notice that and say, hey, maybe I don't need to be out here in left field focusing on a million and one general things I can do with voice.

Maybe we can actually build something that solves a specific problem, and we can leverage the web to do that. I really think you're going to start seeing that a lot over the next two to three years. And especially as standards come into play through the work the Open Voice Network and other orgs are doing, I think we're going to see a lot more enterprises building or buying ways to create an independent voice or digital assistant.

I really do. 

Roger: Awesome, I like that. All right, so now the walk-a-lot-further-out-on-the-limb question: five or ten years from now. And I'll tell you, we'll have to get together in five or ten years and laugh over whatever part of that prediction turns out wrong. But five or ten years out, rub your crystal ball: where are we going to be?


Nick: I'm going to be really optimistic here, but given how quickly technology has evolved over the last ten years, I'm going to go ahead and say that if it's not one of the larger platforms, I really do think there is going to be a company that creates something similar to Jarvis in Iron Man.

I really do. And I think it's going to have a level of intelligence that we can't even predict yet, because I really think that's where AI is headed. I'm not sure we'll ever create a truly sentient AI that can think the way you or I can, but I do think we'll get good enough to train it on enough data that it's able to form and recognize patterns very similarly to humans, and accomplish tasks in that manner.

So I really do think by 2030 we very well could have a Jarvis-like voice assistant that understands context in a way that current voice technology just cannot, because context is key, right?

I walk into my smart home, and because everything's connected, whatever the voice assistant may be knows that I have a flight coming up in three days. So it's already scheduled an appointment for my dry cleaning. And because the car is connected to the internet, the car gets sent off for an oil change, you know, all this stuff that's interconnected.

I think the voice assistant will be the hub of all that. So I may be too optimistic, but I would love that. That's what I want. Awesome. 

Roger: Awesome. So we need Jarvis; Jarvis becomes reality. And the really interesting thing about Jarvis, too: there's always been this thought of, you know, the Turing test, and Jarvis wouldn't pass the Turing test. You didn't think it was a human, right? You knew it was a bot, but it was super smart. And so I think the Turing test is interesting academically, but I don't care if I'm talking to a bot. I just want that bot to understand and be able to do things, and I'll probably talk to it a little bit differently than I would a human, and that's okay.

But yeah, I love that prediction. I would like to see it. I think Jarvis inspired a lot of people in the industry. 

Nick: I would agree, because when we look at IoT today, I mean, by 2030 I can't even imagine how many more devices are going to be connected to the internet, sharing information.

And to me, it's just the natural progression of where things are headed, especially if Amazon starts delivering packages via drone. 2030 is going to be really interesting. 

Roger: Cool. Let's have the roaring twenties. Let's figure out COVID, right? Yes. 

And then have this roaring twenties of amazing things happening. That's what I'm hoping for and optimistic about. Hey Nick, if people want to keep in touch with you, or with what RedFox AI is doing, what's the best way to do so? 

Nick: Sure. So email is a great way: nick dot myers, and I have the uncommon spelling, M-Y-E-R-S, at redfox dash... Always feel free to send me an email. I'm active on social media, of course. I used to be a bit more active, as we were talking about, and I'm trying to get back to that. But on Twitter you can look me up at my handle, @therednickm, and connect with me on LinkedIn, just search for Nick Myers.

I'm one of the few on there with red hair, so I won't be hard to find. Yeah, I'm pretty open when it comes to people reaching out to me and wanting to chat. Always up for a conversation. 

Roger: Great. And I'll make sure to put those in the show notes. Hey Nick, it has been a pleasure. I was excited when I heard about your new product, and now I'm even more excited about it.

I love conversations where you leave them and you're like, wow, I have more questions and I'm more excited than when I started. 

Nick: Thank you, Roger. I've been looking forward to our conversation too, and thank you for extending the offer, because I haven't done a whole lot of promotion about what we've been working on. So I really appreciate you giving me the chance to come on here and have a great chat. Oh, 

Roger: absolutely. Well, it's been super fun. Thank you so much. And that's all, folks. Till next time, this is Roger Kibbe at the Bixby Developers Chat podcast, signing off.