Digital Transformation & AI
Published May 7, 2023
In this kick-off episode, we discuss how digital transformation and AI are changing how business as we know it operates.
Transcript
Aimann Rasheed:
Jason, welcome to the first episode of TechTonic by EisnerAmper. Thanks for being here and thanks for being our very first guest. So today, we're going to talk about everything related to AI. It's been a very hot topic, and ChatGPT has been kind of leading the way with, in five days, I think, 100 million users or something. So a lot of people have been asking us about it. I think you're one of the best people to talk about it. Why don't you give us an introduction to yourself, your experience, and how you use it today with your existing clients or prospects?
Jason Juliano:
Yeah. No, I appreciate it, and thank you for having me. So I'm Jason Juliano. I'm the national practice leader for EisnerAmper Digital Transformation. I'm also the chairperson for the CompTIA AI Leaders Council, but I've been working with AI now for over 10 years.
In terms of ChatGPT, it's the most traction I've seen in AI in the last 10 years. I think the flexibility and the market play of basically providing ChatGPT for everyone to use and try out... I think that's how they got so much traffic to begin with.
AR:
I don't even think they were expecting that kind of traction so quickly, right?
JJ:
Yeah, yeah. And the website's been down a few times, especially in the early months, November and December, when they first pushed out the GPT-3 playground. But now it's leveraging generative AI, so it's taking a mix, and people are looking at use cases, like how to integrate it with conversational AI and digital assistants, and then they're leveraging it to do deepfakes. So, yeah, a lot of great use cases.
AR:
Yeah. There's just wild stuff out there in terms of what people have been able to come up with, which is really exciting and also sometimes terrifying. So let's start with... I'm sure the audience has an idea at this point just following the news, but from a high level, how does it work exactly? Because as you said, AI has existed for a long time, so why now are we reaching, I guess, the turning point or the climax in this story or this journey for AI?
JJ:
So I think people now have the technology in their hands. Everyone does. So it gives them the opportunity to be much more creative and think of different use cases for how to use it. They've been testing it. As you know, with innovation, you basically test, and whatever doesn't work, you set aside. You test something else out and see if that works. So I think that gave them the opportunity to look at different use cases.
Students have been using it now. Yeah, I'm an adjunct professor. We were talking about how to enable students to still use ChatGPT but, by the same token, make sure that we have plagiarism tools that look at AI-generated content. So if we see that the content is 30% or more AI-generated, we give the student a zero. But by the same token, we allow them to leverage the tool as a learning tool to help them be more creative and think about different solutions.
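A minimal sketch of the grading rule Jason describes, in Python. The detect_ai_share function is a hypothetical stand-in for whatever AI-content detector a school might license, not any specific product's API.

```python
# Hypothetical sketch of the "30% or more AI-generated gets a zero" policy.
# detect_ai_share() is a placeholder, not a real plagiarism tool's API.

AI_CONTENT_THRESHOLD = 0.30  # 30% or more AI-generated content

def detect_ai_share(submission_text: str) -> float:
    """Placeholder: return the estimated fraction of AI-generated text (0.0 to 1.0)."""
    raise NotImplementedError("plug in whichever AI-content detector the school licenses")

def grade_submission(submission_text: str, raw_grade: float) -> float:
    """Apply the policy: zero out submissions that look mostly AI-generated."""
    if detect_ai_share(submission_text) >= AI_CONTENT_THRESHOLD:
        return 0.0
    return raw_grade
```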
AR:
It reminds me of the calculator. When it first came out, they were hesitant to allow students to use it, but then realized that it's not about being able to do complex computation in your head. It's about using your judgment to solve a problem. And part of that is knowing how to use a calculator in the right way. So that's how I see AI.
There's a lot of different kinds of AI. So ChatGPT is what they call a large language model. I mean, I tried to get in there and understand it from a technical perspective. There's a lot. I mean, if you really get into it, it's very complex how it works. The objective is to mimic the general intelligence of the human brain and all of that.
But can you tell us a little about the different kinds of models? So is the same AI that talks to you... Is it the same AI that generates the imagery like DALL·E, or are they completely different, trained differently, and all of that?
JJ:
No, it's the same model. ChatGPT is using the GPT-3 model, which uses transformer-based, generative AI, and it relates to text, and it relates to images. Basically, it takes data that it's learned from and responds to the intent that it's receiving. So from an image-creation perspective, you say, "I want to create a teddy bear floating in space, 100,000 feet up high from planet Earth." So it'll grab what it understands about teddy bears, and space, and Earth, and cute teddy bears, and then-
AR:
Then when it generates the imagery, how does it construct it? Because one step is turning it into words and getting the understanding, in a sense. And then the step after that is turning it into data that can be reviewed, right? So-
JJ:
So that's where the training comes in. So with AI, when you first create the model, you're basically training a toddler. A toddler's a sponge, just grabbing the information that it receives, the look, the feel. And then the feedback ends up saying, "Yeah, this is a bad response," for instance. And it'll learn from that, saying, "Okay. Well, next time, I'll give you a different output." So that's how it eventually learns. I mean, I've been using it since it came out, and some of those images are kind of whacked, but some of them have evolved.
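A minimal sketch of the kind of prompt Jason describes, sent to OpenAI's image-generation (DALL·E) REST endpoint from Python. The endpoint, request fields, and response shape are assumptions based on the public API documentation of the time, so check the current docs before relying on them.

```python
# Rough sketch: turn a text prompt into an image via OpenAI's image API.
# Requires an OPENAI_API_KEY environment variable; fields are assumptions
# based on the public API at the time of this episode.
import os
import requests

prompt = "A cute teddy bear floating in space, 100,000 feet above planet Earth"

response = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"prompt": prompt, "n": 1, "size": "1024x1024"},
    timeout=60,
)
response.raise_for_status()

# The API returns a URL for each generated image.
print(response.json()["data"][0]["url"])
```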
AR:
Right, surprising.
JJ:
Yeah, exactly.
AR:
Right. I mean, I think one of the tipping points was... I don't know if you've heard of the Lensa app, where people were basically uploading profile shots, basically selfies, and then it would generate an anime version of them, like an anime caricature. I'm sure you've seen people use that all the time. It was a really good example of what AI can do, and it also made it really simple for them to understand how it works and the power of AI. So that was really cool to see.
And then I think ChatGPT... I mean, it's been around forever, but I think after that was when it really started to take off, because people got more interested in AI. But it also raised some interesting legal questions, because Lensa will take existing art from people without crediting them, for example, and then it'll generate something brand new.
So if you as an artist were to draw a painting, and you were to look at the Mona Lisa, and then draw it yourself or paint it or whatever, you're protected. Now, is AI also protected, and does that also extend into videos? It's just such a new thing. Have you heard anything about what the industry's trying to do about all this?
JJ:
We chatted about this a little while ago. I mean, we could go really deep down the rabbit hole with this, but I think everyone's trying to get a handle on it: the attorneys, the judges, the creators, the content creators. What's art? What's not art? I don't know what the future has in store for us. Because if you look at artists, an artist gets their inspiration from what's around them too.
AR:
Right. Because nothing is truly original.
JJ:
Exactly. So that's what, quite frankly, AI is doing. So-
AR:
Just doing it faster than you could do it.
JJ:
Yeah, exactly. Right. And as a human being, I could consider myself an artist if I leverage AI as a tool to help me, because I still have to feed in information, right?
AR:
Right, and that's [...] maybe. Yeah, right.
JJ:
Yeah.
AR:
That's true. So I guess we'll have to see. I mean, there's going to be probably a lot of court cases, like precedent set that'll challenge how people...
JJ:
Yeah, and some of them are going to be right. And some of them, quite frankly, will be wrong.
AR:
Right. I don't know if you can really stop people from doing it. I guess this would only apply to commercial entities or something like that.
JJ:
Yeah. And then ethics is going to be a big portion of that. What's right? What's wrong? There's even going to be a very gray area.
AR:
Right. Yeah. So speaking of which, they've been... I would say the first ones that have implemented AI, ChatGPT, on a large scale. I heard they're using not 3.5, but one that's a little bit more fine-tuned. Obviously, they have to put in some guardrails. So when it comes to ethics, it's interesting, because some people are upset because they want AI to be completely neutral, whereas a commercial entity wants it to be, for example, a positive experience at the expense of potentially being objective, right?
JJ:
Right.
AR:
So how do you see that playing out? I mean, a lot of people are complaining that it's biased or leaning politically in one way or the other. Obviously, Microsoft is not intending to promote bias, but inherently, there's always going to be some bias if you're going to add guardrails to it. So what do you think about how that's all playing out?
JJ:
I think even when adding the guardrails to it, you may get some bias in there based on the content that it's learning from.
AR:
That's true.
JJ:
And you're not doing it intentionally.
AR:
Because AI is not passing judgment.
JJ:
Exactly.
AR:
So if 80% of the facts are incorrect, it's going to say, "Well, these are the facts."
JJ:
Yeah. It's going to give you misinformation.
AR:
So in a way, AI is basically the best and worst of humanity distilled into a soulless kind of tool. You can tap into the collective consciousness, in a way, to get an understanding of what people believe, but it won't necessarily tell you what's right or wrong.
JJ:
Right. Now, the challenge with Microsoft is that they're going to be leveraging it to commercialize it. So they have a responsibility to make sure that there's a governance model in place to make corrections and to see that the output of OpenAI, integrated into the Microsoft ecosystem, is correct.
Right now, you mentioned that they're basically layering their secret sauce on top of GPT-3.5, but we all know GPT-4 is coming out this year, and Google is introducing Google Sparrow and Bard. When that happens, they're saying that could be ChatGPT on steroids. So we'll see what happens with that, right?
AR:
I imagine, just predicting the future or trying to, you'll have an AI that will evaluate AI for their bias and expose all of them in a way.
JJ:
It's been happening already. I mean, we've got AI models that teach other AI models. It's a big use case in financial services, where they've created an AI governance model that basically helps train other AI models. Look, we're testing it. We've created our digital assistants, Maya and Faith, and we're teaching them information from our accounting and advisory firm to understand what we do, understand all about our clients, and create a digital concierge assistant internally to help our employees. I mean, five years from now, I believe every worker, when they log in in the morning, is going to have their little digital assistant helping them work on their tasks throughout the day.
AR:
Yeah, that makes sense. So can you tell me more about some of these practical applications, especially in terms of the platforms out there? I mean, I know Amazon has SageMaker, and they have things built on top of their analytics tools that have AI incorporated, but it doesn't seem like a whole lot of people have been using them just yet. Or maybe they are and people just don't notice because it's that streamlined, but-
JJ:
It's the latter. Yeah.
AR:
What was that?
JJ:
It's the latter. People are not noticing.
AR:
Yeah, and I guess-
JJ:
It's streamlined and integrated into their-
AR:
That's how I think it's good, right? It's when you don't notice it. Because if you noticed it, then it stopped passing the... What is it? The Turing test?
JJ:
Yeah. Yeah, the USAA Bank, they've been using IBM Watson Virtual Assistant for years now. So when you actually talk to the digital agent, you're talking to IBM Watson.
AR:
Really?
JJ:
And it's having that conversation with you. It's connected to their banking platform, and it's totally integrated into their workflow.
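A rough sketch of what talking to a Watson-style virtual assistant looks like from code, using IBM's ibm-watson Python SDK. The API key, service URL, and assistant ID are placeholders, and the exact fields may differ by SDK version; this is illustrative only, not USAA's actual integration.

```python
# Rough sketch of a Watson Assistant conversation using the ibm-watson SDK.
# Credentials and IDs below are placeholders; this is illustrative only.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")
assistant = AssistantV2(version="2021-06-14", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

session = assistant.create_session(assistant_id="YOUR_ASSISTANT_ID").get_result()

reply = assistant.message(
    assistant_id="YOUR_ASSISTANT_ID",
    session_id=session["session_id"],
    input={"message_type": "text", "text": "What's my checking account balance?"},
).get_result()

# Text responses come back under output.generic.
for item in reply["output"]["generic"]:
    if item["response_type"] == "text":
        print(item["text"])
```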
AR:
Wow. Wow. I think that's probably one of the best use cases. If someone wants a personalized assistant to help them through a new piece of software, it makes sense that an AI that's been trained on the software can just answer literally any question they have. So I can see that. What are some of the more exciting use cases that you've seen, either with your own clients or just in general?
JJ:
I mean, from a personal perspective, it's definitely that content creation around video and images. It's just crazy what these AI models come up with.
AR:
Yeah, I've seen someone on YouTube, or maybe on LinkedIn. He basically recorded a video and had the AI swap out the name, so that instead of saying, "Hi, Jason," it would say, "Hi, John. Hi, Bob. Hi, Mary," whatever. That way, he can make it seem like he's personalizing every single video that goes out, but it's not personalized.
It's really interesting to try to navigate that from an ethical perspective. Not that it's unethical, but I think in the future, it seems like people will probably have to change the way they see a video like that. Even if they're completely fooled, they still have to be skeptical. And that makes me wonder, is it good or bad for society that we now have to basically question everything, even if it seems completely legitimate? Maybe we have to go back to being face-to-face all the time.
JJ:
I mean, the first time I saw a deepfake was that Obama deepfake they did a couple of years ago, right? And-
AR:
Yeah, it was pretty good. Yeah.
JJ:
It was definitely pretty impressive.
AR:
Have you seen... Lately, they've been doing deepfakes where all the previous presidents are playing Minecraft or chatting together. It's like Biden's yelling at Obama, and Trump is like, "You guys are idiots." They just banter. It's really entertaining because it sounds like Biden, Trump, and Obama.
JJ:
Yeah. I mean, you're using different pieces of AI. So you use speech-to-text natural language processing to capture your voice. And then you layer in this generative AI model to understand what you're saying and incorporate that speech into the imagery. And it also incorporates the facial movements and the hand movements that we're doing right now. That's when it becomes so realistic, because this is what normal humans do from a day-to-day perspective.
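A high-level sketch of the pipeline Jason outlines: transcribe the source speech, re-synthesize it in the target's voice, then drive the target's facial movements from the audio. The Whisper speech-to-text call is real (pip install openai-whisper), while clone_voice and animate_face are hypothetical placeholders rather than any particular library's API.

```python
# Sketch of the deepfake pipeline described above. Only the Whisper
# transcription step uses a real library; the other functions are
# hypothetical placeholders for voice-cloning and face-animation models.
import whisper

def clone_voice(text: str, target_speaker: str) -> bytes:
    """Placeholder: synthesize `text` in the target speaker's voice."""
    raise NotImplementedError

def animate_face(audio: bytes, target_video: str) -> str:
    """Placeholder: drive the target's lip and facial movements from audio."""
    raise NotImplementedError

# 1. Speech-to-text on the source recording.
stt_model = whisper.load_model("base")
transcript = stt_model.transcribe("source_recording.wav")["text"]

# 2. Re-synthesize the words in the target's voice.
audio = clone_voice(transcript, target_speaker="target_persona")

# 3. Layer the audio onto video with matching facial movements.
print(animate_face(audio, target_video="target_clip.mp4"))
```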
AR:
I mean, sky's the limit, obviously, but how far can we get with this technology? I mean, one of the predictions I came up with is that within 10 years... And this is crazy. Within 10 years, I think it'll be possible for every person to have their own personalized assistant that has all the context of their life embedded in the knowledge their personal AI is trained on.
It'll be able to interject, and maybe using AR, it'll show you things, almost like your buddy's right next to you. It can be as simple as having an extra microphone and camera. It sees and hears what you see and hear. On the fly, it's basically processing the world around you and maybe even doing facial recognition. So I could be talking to somebody, and it can say, "Hey, this guy seems anxious." It can tell you afterwards, "This guy seemed anxious at the 30-second mark of your conversation," which is crazy to think about, but it's entirely possible.
JJ:
The technology exists today.
AR:
It does.
JJ:
I mean, Google Glass didn't make it, not because of the technology. It didn't make it because of privacy issues. People didn't know when someone was recording them through the glasses. Or, I was on a call... I have the Bose, what is it, the Bluetooth Bose with the mic and the headset built in. I was having a phone conversation, I was next to folks, and they were looking at me like, "Are you talking to me?" And then Ray-Ban did this recently with Facebook and Oculus, where-
AR:
And it looks pretty good.
JJ:
Yeah. I mean, what they did to try to limit the privacy issues, they created a small little light. So when you're recording-
AR:
It shows.
JJ:
... the light shows. So they're hoping that'll change everyone's perception. The technology's here today. I could create a digital twin of myself, basically teach it, basically talk to it. I could journal through this model and tell it my persona. I could give it my persona. Then it could understand who I am as a human being and, quite frankly, create an avatar. If I were to leave this Earth-
AR:
Where you are on the Myers-Briggs personality scale.
JJ:
Yeah, exactly. My kids could talk to this thing forever.
AR:
Right. Yeah. It's interesting that you raised children... Well, not raised children, but brought up the topic. Imagine someone passing away prematurely, but their kids have a version of their dad or a version of their mom that can guide them through life. Let's say they're 10 years old and... Well, I think there was a... Was it Black Mirror? You know that show?
JJ:
Yeah.
AR:
I think there was an episode on that-
JJ:
I think so, yeah.
AR:
... where a wife passed away or something, and the guy was talking to her on a phone. It's crazy to think what's possible.
JJ:
Yeah. I mean, like I said, the technology's already here. Some of the technology isn't great. It's like DALL·E, right? It's learning.
AR:
And some of it is CPU bottlenecks that will-
JJ:
And some people just don't know it exists today.
AR:
That's true. True.
JJ:
As a firm, our job is to be the storytellers and look at what our clients are going through and how we could leverage-
AR:
To define the different solutions.
JJ:
Yeah, and leveraging all these emerging technology solutions to solve business problems, right? So it could be anything.
AR:
Yeah. I mean, there's so many startups now, and it's funny. There's a joke or a meme on LinkedIn about AI startups being like, "Oh, I implemented the API for ChatGPT and called it a startup." But it is really that simple. Sometimes you just need a good idea.
But there's so many of them now. How do you keep up? A lot of people have asked me like, "It's too much. It's overwhelming. I don't even know what's possible. I know that AI could help me in 500 things throughout my day, but signing up for 500 services is overwhelming."
It seems to me, and I would love to know your take, that the end game is that you're going to have a roll-up, and it's going to be basically office-related stuff, tasks, and automations, and assisting and then personal-level stuff. Then there will be suites. The next Microsoft Office 365 will be AI Personal 365, and then AI Work 360. You know what I mean? Something like that. But what do you think of how the landscape is kind of looking?
JJ:
I think this use case could be used by literally everyone. I'm actually writing a book. I've been writing a book for, I don't know, a long time now. I have a couple of chapters, but writer's block. I got stuck. I think I'd created five chapters already. I fed them in through ChatGPT and the GPT-3 playground-
AR:
All five chapters?
JJ:
All five chapters. So it knows what I'm writing about. And I said, "Can you give me a relative outline of future chapters based on the content there?" So I prompted it. It gave me 30 different chapters and ideas I could write on, and from there, I wrote an additional two chapters, and so on and so forth. So I used it as a digital assistant to help me be much more creative. So people that are creating content, people that want to send a formal email, can leverage these tools. Just prompt it: "Hey, I'm looking to send an email to ABC person, and this is what I'm trying to get out of it. Can you help me create this email?"
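A minimal sketch of the prompt Jason describes, using the GPT-3 completions endpoint the playground exposed at the time. The model name, fields, and file name are assumptions for illustration.

```python
# Rough sketch: feed existing chapters to GPT-3 and ask for an outline of
# future chapters. Model name and fields reflect the era's public API and
# are assumptions; "my_five_chapters.txt" is a placeholder file.
import os
import requests

chapters = open("my_five_chapters.txt").read()

prompt = (
    "Here are the first five chapters of my book:\n\n"
    f"{chapters}\n\n"
    "Based on this content, give me an outline of possible future chapters."
)

response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 800},
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"])
```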
AR:
It's funny you mention that. On Reddit, I saw someone who used ChatGPT to take a passive-aggressive email they wrote and... basically, they told it to remove the passive aggression from the email. They were, I guess, upset at somebody, but they wanted to say the same things without it seeming aggressive. I was floored, because, yeah, it makes sense, but imagine having to clear your head, take your emotion out of it, and then sit there for an hour revising words. This thing did it in seconds, and it's so useful.
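A small sketch of that email-rewrite trick, using OpenAI's chat completions endpoint. The model name and fields are assumptions based on the public API, and the draft email is made up for illustration.

```python
# Rough sketch: ask a chat model to strip the passive aggression out of a
# draft email while keeping the same points. The draft text is invented.
import os
import requests

draft = "Per my last email, as I already explained twice, the report was due Friday."

response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system",
             "content": "Rewrite emails to remove passive aggression while keeping the same points."},
            {"role": "user", "content": draft},
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```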
One thing that's crazy is I was talking to a therapist, and they were talking about how in the future, they expect AI to take a string of videos of a child as they develop and be able to say, as a diagnostic tool, what kind of issues they have from a developmental perspective. Then it'll assist the parents in providing activities for the child to learn more skills, like patience or stuff like that. Or it can even diagnose people who have deep-seated emotional trauma.
If your AI is with you from birth... And I know this sounds dystopian, but imagine your two-year-old now has an AI assistant that's always on them and keeps them safe too, because there are predators, and sometimes it's very nuanced, like who do you trust and who do you not trust. It's kind of weird to think that everyone's going to have this level of personal surveillance, but I don't know. It's kind of anarchist in a way, but it makes sense that that's the end goal because of how useful it'll be for each individual person that leverages it.
JJ:
Yeah. Then the conversation will come into play about privacy, right? Because now they know too much about you.
AR:
Right. Well, it's your AI. Is it okay that it knows too much about you?
JJ:
But we have to make sure that it's protected and that no one could get to it because this is setting so-
AR:
I mean, that's a level... but-
JJ:
It is emotion. You could really capture it.
AR:
You know what's crazy? I mean, Facebook came into the middle of our lives for us, but there were some people who were born with it. For us, privacy was a big thing. And I'm sure it's still a big thing for a lot of people, but every subsequent generation seems to be more and more willing to give up privacy for convenience. It seems that way, but-
JJ:
So I'll take a devil's advocate approach. What if you introduce a government social credit score? Have you heard what China's doing?
AR:
I haven't. I've heard of the concept, but-
JJ:
So you could take that information and say, "I don't think your views and the way you're living your life are up to par with our citizens. So I'm going to give you a lower social credit score," and-
AR:
This is the Chinese government doing this?
JJ:
Yeah, yeah. It's been in existence for a while, maybe 10 years.
AR:
Basically, they're taking data from social media and stuff?
JJ:
Yeah, if you cross the street at a red light, your social credit score will go down.
AR:
Using facial recognition?
JJ:
Yeah.
AR:
Oh, wow.
JJ:
That exists today.
AR:
That exists today in China?
JJ:
In China, yeah.
AR:
That's wild.
JJ:
Yeah. So we have to be careful with stuff like that, right?
AR:
Right. Because on a personal level, if that data does get out, that's one thing. But if the government's eventually like, "Well, you have no choice now. Your data's plugged into our systems," then I'd say it's a whole new ballgame.
JJ:
Have you watched Minority Report? So-
AR:
I love that movie.
JJ:
Yeah. So you can actually create predictive analytics based on that and predict if the guy or the girl is going to break the law, just based on their patterns.
AR:
Yeah, that's interesting. I guess when I saw it, I never thought... But the technology is there to some degree. It might not be 100%, but neither-
JJ:
It depends on the intent of whoever uses it and if they're powerful enough to use it.
AR:
That's why I feel like AI is simply an extension of human consciousness. So if we want to use it for good or evil, it's going to happen. And the tools that we create to protect ourselves from AI, and also to help further our own goals, all of that is within our power. So any problems or solutions that come from it are merely an extension of our own will, in a sense. So that's why I worry about AI, but not because of AI, because of how people will...
JJ:
Yeah, it's the ethics behind it. In this conversation about AI, it is very easy to go down a rabbit hole, and we're going to have debates till the end of time related to AI.
AR:
Yeah, I agree. I think that, I don't know, the next five to 10 years are going to be interesting and probably very difficult for people to manage. There's going to be theft. There's going to be next-level scamming.
I wonder if CAPTCHA is going to work anymore. And imagine if CAPTCHA doesn't work anymore, if it's broken by AI. People are going to be able to influence public discourse by just engaging in a conversation with the wrong facts, or something that sounds correct. I mean, AI today is not always accurate. Actually, I have something funny I want to share with you.
JJ:
Okay.
AR:
Okay. So someone asked ChatGPT... They pasted a piece of code and asked for a solution, or they prompted it for a piece of code that did a specific task. Then ChatGPT came up with a bunch of code, and then the person said, "Your solution gives me an error. Please fix your solution." And then ChatGPT apologized for the confusion. "Here's an updated solution that should resolve the error."
And then the person said, "What did you change in your last solution in comparison to the solution you sent before?" ChatGPT responded with, "In my last solution, I provided the exact same code as the previous solution, but made an apology for the confusion in my previous answers." So it's like who knows what comes out of this thing?
JJ:
Yeah, absolutely.
AR:
So people, I guess, have to just be careful, but that doesn't mean that there isn't real power, real leverage that companies today can use in real circumstances. For example, data is a big one, right?
JJ:
Yeah.
AR:
So we do have to wrap up, but I wanted to ask you, where would a company start if they wanted to explore what's possible with AI? Because I think one of the issues is the unknown unknown. They don't know what they can do with the data they have or with their current staff or whatever it is. So where would they start?
JJ:
I mean, quite frankly, they could reach out to us. We do a free AI journey workshop, for an hour or a little longer, to understand their business and how we could leverage some of these AI tools, incorporate them into their business models and their workflows, and see what works, what doesn't work, and what could provide the most value.
AR:
Basically, you do an observational assessment where you look at their processes, et cetera, and it just-
JJ:
Understands their business.
AR:
Right, and what their missions are, make sure the two align, and then find different AI tools that can be incorporated, and then do the implementation, and monitoring, and all that stuff.
JJ:
Yeah, absolutely.
AR:
And that's something you feel like there's a lot more interest now in that?
JJ:
Oh, yeah. I mean, I've been getting pinged at least every day with, one, "Have you heard of ChatGPT?" Yeah, I get a message every day: "How can you get ChatGPT to help us and incorporate it into our business or our industry?"
AR:
Right. Okay. Well, I mean, this has been an awesome conversation. I love talking theory, ethics, and everything related to emerging technology, so thanks for giving your insight. Where can people find you and the kind of different programs that you guys offer?
JJ:
Yeah. So you could find our entire team at eisneramper.com/eadigital. Then, yeah, once you go on the website, you'll see the different teams that we have and a way to reach out to us directly.
But myself, you could just type Jason Juliano on Google, and you'll find me everywhere.
AR:
Cool. Yeah, you can't hide these days, right?
JJ:
Yeah, yeah, unfortunately.
AR:
All right. Well, thanks again. I appreciate your time, and hopefully, we'll talk again soon.
JJ:
All right. Thank you for the time. Appreciate it.
Transcribed by Rev.com
TechTonic
TechTonic is a podcast series that guides listeners through seemingly complex topics relating to technology and its use cases.