If you’re a senior leader, managing technology has never been more challenging, especially as organizations struggle to deploy generative artificial intelligence. Since ChatGPT burst into the mainstream a year and a half ago, everyone has been scrambling to make sense of how to use these tools, what they can and can’t do, and what they mean for our work and our teams. This is the show for research, stories, and advice to make technology work for you and your team.
TOM STACKPOLE: We’re both senior editors covering technology here at the Harvard Business Review, and in the last year, we’ve seen more pitches than we can count on generative AI. Juan, how would you say the questions we’re seeing have evolved over the last year, and where are we now?

TOM STACKPOLE: So I think you make a lot of good points. For me, there are still a few things that give me pause, especially when thinking about adoption at scale.
ETHAN MOLLICK: So the crazy thing about the state of AI right now is that nobody knows anything. I talk to all the major AI labs on a regular basis and we have conversations, and I think people think there’s an instruction manual that’s hidden somewhere. There is not. Nobody knows anything. There’s no information out there.
ETHAN MOLLICK: And then I would say the third reason is once you get over the freakiness, it’s super interesting and fun to explore. This system does a lot of things that are really neat, and you can be the first person to figure out what those things are.

TOM STACKPOLE: In the book, you talk a lot about how you used AI to write this. Can you tell us a little bit more about what that process was like, what worked well, what didn’t, and how it changed the way you were thinking about using these tools?
TOM STACKPOLE: So one of the things that you write about that I think is really useful is that you have a great way of thinking about the kinds of work tasks we should continue to do ourselves versus the kinds of tasks that GenAI might be able to take on. Can you break these into categories for us?

TOM STACKPOLE: Yeah, I mean, one of the things that’s been interesting with critics of some of these LLMs is that people were saying, “There are limits to this architecture. There is going to be a plateau, and it’s going to come sooner than people think.” What do you make of that argument?
TOM STACKPOLE: In this context, you’ve talked about how important it is to still have expertise. My first magazine job was being a fact-checker, and it was brutally slow and surprisingly hard, because there are facts baked into all kinds of things and it’s not always immediately obvious what assumptions are being made.
JUAN MARTINEZ: Well, you’ve actually studied this in a real-world setting. Can you talk a little bit about the study with BCG? Their consultants were given GPT-4 to perform some of their work tasks. How did it work, what did they do, and what did you learn from it?

ETHAN MOLLICK: I think there’s a lot to think about if you’re an enterprise owner or a manager, because the incentive right now is for your employees to secretly use these systems, and they’re doing that all the time. You can think about the reasons why.
TOM STACKPOLE: I think the stories about the hazards of how this could be applied by companies are really interesting, and I want to look at a different study, this one by Harvard Business School researcher Fabrizio Dell’Acqua, who studied recruiters using AI. Some were given a good AI, and some were given a mediocre one.
So one of the things that you articulate really well in this book is that there’s a tension between how easy it is for individuals to be innovative with these tools and how hard it is for institutions to do the same thing, for a variety of reasons.
ETHAN MOLLICK: I mean, one of the more effective, and also more extreme, examples was IgniteTech, which is a software holding company. The CEO got into the idea of AI very early, gave everybody GPT-4 access last summer, and said, “Everyone needs to use it.” He has told me that he then fired everybody who didn’t put a couple of hours in by the end of the month, but he also offered cash prizes for the best prompts.
ETHAN MOLLICK: So just for people who aren’t that familiar, frontier models are the most advanced models, and right now there’s a very strong scaling law in AI: the bigger your model is, which also means the more expensive it is to train, the smarter it is. The result is that the most advanced frontier models are often much better than specialized models built for specialized tasks.