302. Is AI a Friend or Foe? Can We Get Ahead of It?

Artificial Intelligence (AI) has the potential to bring significant benefits and advancements to various fields, including healthcare, transportation, education, and more. It can automate tedious tasks, improve efficiency, and enhance decision-making processes. However, AI also presents challenges and risks that need to be addressed. It is crucial to promote responsible AI development, ethical guidelines, and appropriate regulations to harness the benefits of AI while mitigating its risks. In this episode, we delve into the pros and cons of AI and unpack the anxiety and interest surrounding it. We discuss the concept of emergence, its impact on the job market, using AI as a tool, and ethical considerations. We also talk about the challenges and limitations of AI, whether human involvement will be needed in the future, and much more. Join us as we navigate the twists and turns of this cutting-edge technology and separate fact from fiction!


Key Points From This Episode:

  • Discussion on the fascination and uncertainty surrounding AI.
  • Concerns about ‘emergence’ and AI learning to do unexpected tasks.
  • The potential impact on content creation and editing jobs.
  • Unpacking AI as a tool for developers.
  • Examples of leveraging AI in the workplace.
  • Challenges of using AI in generating content and the need for evaluation.
  • Whether chatbots and large language models (LLMs) are customizable.
  • Highlights of AI's limitations and the importance of human involvement.
  • And much more!

Transcript for Episode 302. Is AI a Friend or Foe? Can We Get Ahead of It?

[INTRODUCTION]

[00:00:01] MN: Hello, welcome to The Rabbit Hole: The Definitive Developers Podcast, living large in New York. I'm your host, Michael Nuñez, and I have friends with me, people. We’ll start with Dan Mason.

[DISCUSSION]

[00:00:12] DM: Hey, how's it going?

[00:00:12] MN: Melissa Wahnish.

[00:00:14] MW: Hello, again.

[00:00:15] MN: And Stevie Oberg.

[00:00:16] SO: Hello.

[00:00:16] MN: Today's topic we'll be talking about is, is AI a friend or foe? Can we get ahead of it? I can assure you, folks, if you're listening, that I'm not talking to robots. I have real friends and co-workers here with me who want to talk with me about AI, which is great. I imagine if you're listening, you may have had an opportunity to work with some form of chatbot, or some kind of AI tool that exists out there. I brought these lovely folks to ask the question: Should I be terrified of AI? Am I at risk as a human person in the workforce? Dan, what would your answer be if someone said, “Hey, are the robots going to take our lives? What are we going to do? Should I be terrified?” Go ahead, Dan, what do you think?

[00:01:02] DM: Yes. I don't think you should be terrified. I think the whole thing is fascinating. It's more just that we don't know exactly where this is going, and I think that's the scary part, right? But we've seen a handful of things that these things are good at. We've figured out some ways to use it in daily life, and some ways that are still just pure experiments and fun. I think they say that all the best things start as toys. You never know exactly when you end up figuring out how to make it real. I think we're still in the toy phase. But even the toy phase is actually scarily close to some of what we do for work. That's part of why this is freaking people out. It is very close to what you do.

[00:01:40] MN: Then, Melissa, you mentioned earlier, before we started recording, the term emergence. First and foremost, I had to learn all these new words and whatnot. But you mentioned emergence is the scary part. Tell me, what is emergence and –

[00:01:52] MW: I found that to be scary, when I read about machine learning and emergence, and how it will learn to do things it wasn't told to do in the first place. One example is actually how we've been utilizing it quite a bit: as a co-pilot to help us code. That's not a use case that was thought of beforehand. But it became very good at it because it looked at the entire Internet, and there's lots of code examples on the Internet. Now, here it goes, it's emerging to take on this task, and I'm a little bit terrified about what else it will learn to do. A little bit of Terminator. At some point, does it become more powerful than us? I don't know. I know that's science fiction, but I'm a little terrified.

[00:02:42] MN: Feels very real, right? I've heard cases where it does a really good job at writing tests for you. It's like, “No, that should be me. I'm the person who read the user story and knows what these tests are. How do you know this?” Well, Stevie, are you terrified of AI right now, or are you standing strong like Dan in this challenge that we have?

[00:03:04] SO: No, I'm not terrified of it. I think much like Dan said, it is a tool, and like all tools, it can be used for good or bad. That being said, I am not terrified. But I'm cautious. People will use this for bad things, but they’ll also use it for good things. That balance is kind of where we sit, I think, as developers.

[00:03:24] MN: I think we’re 50/50 on the terrified and not terrified. That's good, I guess. I mean, I might get – y'all might be able to win me over. As a developer, can we agree that our jobs may not be at risk at the moment with AI as developers or as a PM?

[00:03:43] DM: Yes, I'll jump in as a PM. I'm more of a product guy now than anything else. But I was a developer for seven or eight years at college, and the thing that just struck me with this stuff is that I still know what code looks like, right? I can still think it, I can still write some of it. But literally, the last code that I pushed to production was C++ on a PalmPilot. It's been a long time, right?

So, when I actually looked at this thing, I was like, “I have no idea where the semicolons go, but I kind of know what I want. Can you write this for me in TypeScript?” That's actually what I used this thing to do. It wrote fine TypeScript. I had to figure out where to plug it in. I still had to host it. That was one of the big things: these things don't yet do DevOps, right? We might get there. There are platforms that are starting to do this, basically pumping bot code straight into a thing that runs. But there are still gaps that I think people fill in. That may not be true for everybody, but it's true right now.

[00:04:32] MN: It's very interesting that you mentioned DevOps. It's a good segue to the next thing I want to talk about. As developers or PMs, it seems like you can utilize the tool to your advantage, right? But what about other jobs, like as a human? The first thing that comes to mind is similar to DevOps being the Internet's pipes. I don't see a robot replacing plumbers, who deal with actual pipes with actual water. Does anyone have an idea or thoughts on what jobs could be at risk because of AI?

[00:05:06] DM: I think content creation is very much at risk, just in general. It's both because a lot of content creation is built off of data that you can suck out of a database anytime you want. It's actually been true. I'm a fantasy baseball and football player. Those updates that you get on players every day, those have been machine-generated for years. They're on templates, they pull out of the data, they get sucked in from the games. A lot of this has already happened. It's just that now, this is a lot more flexible, and it goes a lot more into the creative bits.

[00:05:35] MN: Any other thoughts on another job?

[00:05:36] MW: I think there are jobs that might get harder. I've heard a lot of general talk about teachers and professors navigating that plagiarism line. How do they navigate that now? Now, some of their jobs may become easier. Write me a lesson plan with blah, blah, blah, and it puts it all together. That becomes easier. That's time-saving. It gets them started. Maybe they fill it out more. But then they have to battle plagiarism and battle those more ethical conversations.

[00:06:10] MN: I am so glad that I am not in a position where I would be writing papers, because I would have used it to plagiarize all of my papers. I was a crappy college and high school student. I'm glad that I didn't have those tools and that I was forced to write really bad papers, because I would definitely lean on this. It'd be a problem. I'd have a huge problem with plagiarism, and I'm so glad there weren't iPhones when I was in high school. I had to physically use a pen to write a thing. Stevie, any jobs you have in mind that weren't mentioned that could be at risk or could be enhanced?

[00:06:43] SO: Yes. Kind of in line with the content creation, to be honest. I am less concerned actually about generating content, and more concerned about the jobs that edit that content. I think that's really where AI will thrive. Because I think the best content is the content that comes from our own experience. AI doesn't experience things, especially not how we do. You don't have an AI walking through life.

[00:07:10] MN: Does an AI dream of electric sheep? I think that's a phrase or something of that nature, a book, if I recall. But I don't know. Yes. I think we could go into the leverage. I'm actually curious to hear, how have y'all leveraged AI in the workplace? I think, Dan, you mentioned like, “Hey, write this thing, and I want you to do it in TypeScript.” I have one real professional usage for AI, and then one really, really ridiculous one – and you don't have to come up with those two, it's fine.

I was at a client and they wrote Java, which I haven't written in four years. So, I had to do prompt-driven development, like, “Oh, how would I add a new item into this list?” It's like, “Oh, okay.” Then, the AI didn't fully understand, so it just gave me arrays, but ArrayList is the way to go in Java. I kept going continuously, like, “Okay, use this ArrayList. Pretend that this is the object that I want to have in the collection, and this is the other item that I want to shove in. How would you do it?”

Okay, how would a real Java developer do it? What's the latest API that would allow me to do it? It talks about streams. I'm like, “Yes, people will think I'm a real Java developer, because I'm just copying and pasting this code that it just used to do the thing.” Then the very unhinged use of the AI chatbot. Dan, you mentioned earlier in our conversation how LinkedIn is just a crazy social media to navigate through. I actually had it, for an entire day, come up with insane posts on software development challenges pertaining to X, like what I'd experienced the day before. I was like, “Add a lot of emojis.” Believe me, I think ChatGPT knows how unhinged these LinkedIn posts are. If you haven't seen it, you can check on my LinkedIn. I think it's Michael J. Nuñez. It's the only time I've ever posted. It's crazy.
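For readers who don't write Java, the ArrayList-and-streams pattern described above might look something like this. A minimal sketch; the class name and item values are invented for illustration, not taken from the episode:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;

public class ListExample {
    public static void main(String[] args) {
        // An ArrayList is the idiomatic resizable collection, versus a plain array.
        List<String> items = new ArrayList<>();
        items.add("widget");
        items.add("gadget");

        // The Streams API (Java 8+) transforms the collection declaratively.
        List<String> labels = items.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(labels); // prints [WIDGET, GADGET]
    }
}
```

The stream pipeline is the kind of “how would a real Java developer do it” answer the chatbot was giving: instead of a hand-rolled loop, you chain `map` and `collect` over the list.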

I mean, has anyone – Stevie, honestly, what have you used a chatbot, or a cartoon bot, if you will, for in your daily life or for programming purposes?

[00:09:09] SO: I think the main way I've been using it thus far is as a replacement for Google. So, just searching things like CSS properties and seeing how it would recommend using them. What I've found so far is that you basically need to know what they are to be able to use that information anyway. But it saves some time. Instead of having to go through five different links to make sure I understand something, it regurgitates it in a paragraph.

[00:09:38] MW: We had to update our bios at Stride, and I said, “Okay.” There was a format that people were using. I tried to get it to do that. I realized how difficult writing these prompts is, and it just was not giving me what I really wanted. I felt like I was in Galaxy Quest. If you remember, only Sigourney Weaver could talk to the ship. I felt like, how do I become Sigourney Weaver? I'm not talking to the ship correctly. But I did get ideas from it, right? I did get phrases, and it inspired some creativity that I didn't have before. I use it for that. I'm still working on becoming Sigourney Weaver.

[00:10:22] DM: Actually, I'd love to jump in on that one, because I think the whole creativity thing is key. It's a creative unblock. If you're in a place where you're stuck, or you're just like, “I'm not totally sure how to say this, but I know what I want to say,” then you go straight to this thing. It'll be great at it. I've actually found myself not doing that as much. Part of it is, if you feel you know what you want to say, and you don't want this thing to restate it in words that aren't yours, then it can be weird. If you have your own voice, if you're like, I know what I want to talk about, I find myself not going to this thing for that. But for something I either don't know how to start doing, or that I don't really understand very well, or that I know would take a lot more time to go and research myself, it's great. When you have certain expertise, I find myself not asking it about the things that I know pretty well.

But it's weird. I feel like you've heard about that effect across the industry: people who don't really know how to do something are asking it to code entire applications for them. People who do know how to code are really skeptical, at least in a lot of ways.

[00:11:17] MN: Yes, sometimes you have to look at it – I mean, I don't know if y'all have ever experienced it, but the chatbot will straight-up lie about certain things. It's like, “No, that is not true.” Here's an example. I've used it. I'm terrified of emailing people. I'll often be like, “Hey, write me an email that asks the appliance person about fixing my dishwasher or whatever.” It's really formal. So, I'm like, “No, that's way too formal.” But the email that it generated was like, “Oh, yes, and I've done everything I could in my power to see to that.” No, I didn't. I didn't even open it. I just saw water spilling and I need someone to fix it. Why would you say that I tried something? Don't lie to me. Don't lie for me either. It's wild.

You always got to look at it. We always got to look at the code and find out, because if we don't, then that'll be a problem.

[00:12:09] MW: It's definitely hard to see the truth. That's why it's not going to take over everyone's job, because there's still an evaluation of the output that needs to happen.

[00:12:19] DM: Although, just as a counterpoint to that, it might be that this is also the beginning of an explosion of bad code in the world, right? Because anybody can write it, and maybe it works, but it's terrible. It's like what happened in business with Excel. You had the power to do macros and all this stuff where, otherwise, you would have had to build software or learn algebra. You could just do these stupid macros that were absolutely awful but worked. I just wonder how much you're going to see bad software making its way into the world because it's so easy now. You'll just see a lot more of it.

[00:12:54] MW: Oh, no, then ChatGPT will feed it back to you.

[00:12:58] DM: It will. As it learns from the bad code. That’s true too.

[00:12:59] MW: Oh, no. That’s something to be afraid of.

[00:13:04] MN: We got to teach it a design pattern. That's what we should have it do. I want to talk about – so say, can I separate and have my own personal chatbot that I feed code, and it's only the best code, the best kind, and privatize it? Is there a way to spin that up and ensure that the information that I share with it won't end up on the Internet? I would hate for it to somehow, in another person's chatbot, spit out my social security number. That'd be terrifying. I'm curious, is there a way to make it – to silo it to a given data set and then use that to your advantage based on your context and documents?

[00:13:42] DM: Yes. There definitely is. We started experimenting with that a little bit. I think the thing that we found so far is that it takes a fair amount to just understand the moving parts. As soon as you understand all the stuff that you need to make that happen, it's totally doable. I think the thing that we're hoping to help people figure out is, what is the right size and scope of one of these language models for you? You may need it to do one very specific thing, cool. We'll train it up, we'll put it in a box, and you're done. You may want the full power of ChatGPT but with some idea of what your business is like. We can do that too, right? It really just depends on your requirements, data security things, and cost. All of that is going to slide things around. But once we know what really makes sense for you, we can figure that out. It's definitely doable.

[00:14:27] MN: It is good that if there is some proprietary documentation or IP, companies can then silo their automation to be a little bit more specific to their business domain. I think for now, I'm probably not going to make my own, and I'll make sure I don't put my social in any chat box, or in any text box for that matter, unless I know it's secure for me to do that. But I think we learned. Some people are terrified. I'm sorry, Dan, Stevie. I'm still scared, whether it's because it knows my social or because of the emergence that Melissa mentioned earlier. So, I'm scared. Y'all are going to have to hold my hand for this.

[00:15:08] MN: But yes, I mean, it is exciting, though. I think it's been a minute since something like this has really caught on, right? That there's this small thing that grows and grows and grows, and everybody's doing it. I mean, that happened with social networks. If you think about how that grew and grew over time, there were good and bad parts, and it's still evolving. I see ChatGPT and language models being in that realm: it's still so new, we're still like the college kids talking about who's hot or not, and then we'll grow up. We'll grow up, and then our grandparents will start using it. I think that could be where it's headed.

[00:15:51] DM: Speaking of things that will date us, this will certainly date me. I took an AI class in college. This would have been like 1998, and it was in Lisp, right? Because back then, the thought was, “Oh, well, these things have to be able to program themselves.” There was none of this neural network stuff. Nobody really understood how any of this worked, right? In the mid-2000s, there was a guy, and he's still relevant today, Jeff Hawkins, who was actually one of the founders of PalmPilot, and then basically decided he was going to take his PalmPilot money and figure out how the brain worked.

He founded this company called Numenta, which I think is still out there. He's written a bunch of books, and the most recent one is called A Thousand Brains. If anybody's interested, go read it. Because what it basically says is that the way the brain actually works is a lot like how these AI models work, right? It is actually just a bunch of these different systems making predictions. Literally, the brain is a prediction machine, just like these things are. When you look at emergence, that's actually a big part of it. They're essentially acting the same way the brain does, but they're not doing it with all of the same systems in place. One of the most optimistic takes I've heard is that, because there's no animal brain in these AIs, they're probably not going to ever want to kill anybody. Fingers crossed. Because really, that's what the animal brain kind of is. The animal brain is fight, flight, kill, whatever. We didn't build that in. All we gave it was the neocortex. Hopefully, that's a good thing.

[00:17:09] SO: Fingers crossed.

[00:17:11] MN: Hopefully. That’s a good thing.

[00:17:14] DM: Exactly. Fingers crossed, all across the board.

[OUTRO]

[00:17:18] MN: Follow us now on Twitter @radiofreerabbit so we can keep the conversation going. Like what you hear? Give us a five-star review and help developers just like you find their way into the rabbit hole. Never miss an episode. Subscribe now, however you listen to your favorite podcast. On behalf of our producer extraordinaire, William Jeffries, and my amazing co-host, Dave Anderson, and me, your host, Michael Nuñez. Thanks for listening to The Rabbit Hole.

[END]

Links Mentioned in Today’s Episode:

Dan Mason on LinkedIn

Melissa Wahnish on LinkedIn

Melissa Wahnish on Twitter

Stevie Oberg

Stevie Oberg on LinkedIn

ChatGPT

Numenta

A Thousand Brains

Michael Nuñez on LinkedIn

Michael Nuñez on Twitter

Stride Consulting

Stride Consulting - Contact

The Rabbit Hole on Twitter

The Rabbit Hole