
3 AI puzzles workplaces must solve | Martin Gonzalez for Big Think +


Big Think


0:00I'm Martin Gonzalez. I'm a principal of organization and leadership development at
0:04Google, and I'm the author of The Bonfire Moment. The book explores the idea that teams are harder
0:10than tech. In the process of innovation, it's so important for leaders, CEOs, and founders
0:18to pay attention to the people side of the business, because that can easily derail your
0:23best-laid plans. We know a lot of employees and organizations are starting to use AI for
0:30their work. We also know that we flip-flop between two really intense narratives. One is a narrative of substitution:
0:37our jobs are going to go away, my role will get replaced, there will be fewer people
0:43doing my kind of work, my kind of role, because of AI. The other is a narrative of augmentation:
0:50these tools give me superpowers that allow me to do more within my role, and I will succeed and
0:56do well in the future if only I can adapt to these new technologies. There's a lot we need to
1:02think about when we consider the augmentation model, because early research is showing that as we
1:09bring these tools into the workplace, we're not quite seeing the kind of transformative
1:14potential that AI's inventors have talked about. So I've started to think
1:20about three puzzles we need to solve as we bring these technologies into our organizations.
1:31One of the challenges in bringing AI into an organization is what I've started to call the
1:37selective upgrade puzzle. This is when these tools endow their users with superpowers, but
1:44not all users: somehow there's a selective upgrade that happens when these tools get
1:50shared in an organization. One randomized controlled experiment, run by researchers
1:56from places like Harvard and MIT, engaged the Boston Consulting Group and set up its junior
2:04consultants in a control group and a couple of experimental groups. What they did was they
2:09gave them access to a large language model, and they were asked to do two kinds of tasks. The
2:17first was a creative ideation task: they had to help a fictitious client come up with different
2:24product ideas that it could go to market with. The second was a business analytics task, where
2:29they had to analyze why a business was struggling and create recommendations. What the study went
2:36on to discover was that the top performers tended to do much better,
2:41while the lower performers tended to do much worse. When you think
2:45of this selective upgrade effect spread across thousands of employees over a span of time, what
2:52we might see is an ever-growing gap between your best and worst performers. And that variability
2:59would be attributable to the use of these AI tools, where the gap didn't exist
3:04before you deployed those tools. There are a couple of things that leaders can think
3:08about as they deploy AI in their organization. The first is to create really clear
3:14guardrails around what these tools should and shouldn't be used for. Those guardrails
3:20will possibly diminish over time as these tools become much more effective. But it's
3:27important to go through this experimental period understanding where the tool actually augments
3:33the work and where it actually takes away from it. Another thing to consider: it's important
3:39for users, as they leverage these tools in certain domains, to have a basic
3:46level of expertise in those domains. That expertise allows users to apply good judgment about when a tool is
3:54actually leading them in a worse direction and when it is genuinely augmenting the work.
4:00Using a tool when you have zero knowledge of the domain is a very dangerous proposition.
4:14As we think about bringing AI into our  organizations, we need to think about this  
4:18agentic preference puzzle. We as humans have a  tendency towards control. And when these tools  
4:26take away control from the work, we see that  adoption rates drop. There are some fascinating  
4:33studies done out of Wharton that explore an idea they call algorithm aversion. For
4:39example, when was the last time you decided to override what Google Maps or Waze told you was
4:46the right way home? We'll sometimes believe that we actually have a lower
4:51error rate than these machines. What this branch of research found
4:56was that when individuals actually observe an algorithm commit an error, even if its error rate
5:05is still lower than the human error rate, we would much rather trust our human judgment
5:10over the algorithm. It goes on to explain that perhaps one way to think about this is: when we
5:16think of algorithms and these AI bots, their error rates are knowable and static,
5:24but human intuition and human intelligence are perfectible, and perhaps we therefore trust that we
5:31can perfect our own judgment in certain tasks. The research then tries to figure out what's
5:36the right antidote to this. In one study, they allow users of these algorithms to
5:43tweak, ever so slightly, different parameters of those algorithms. When people are given that
5:51leeway to control the algorithm, what you find is that the error rates increase as
5:57a result, as you would expect. But you also see the adoption rates significantly
6:02increase, because people can control it. And this drives home a really valuable point around
6:09adoption of these AI tools. As a leader, you might think about what error rate
6:15is acceptable if it means you then create a lot more adoption in the workplace.
6:21The ideal scenario is that people adopt these tools fully without tweaking them. But we know
6:27that that comes at the cost of lower adoption. Are we willing to sacrifice some amount of
6:32precision in the use of these tools in exchange for an improved level of adoption?
6:46The final puzzle is the self-sufficiency spiral. If you think about all the work we do
6:51in an organization, you can categorize it into solo work and interdependent work. And you might
6:58say that in the future, these tools will allow us to do a lot more solo work, and a lot of that solo
7:06work will colonize parts of the interdependent work. What gets left behind as
7:12interdependent work, whether it's writing emails, doing presentations, or conducting meetings,
7:19will then largely get intermediated by these AI tools. When you think about what
7:24it takes to create culture in an organization, or the role of the leader in bringing
7:31people together around a shared mission, a lot of that is about interactive tasks. A lot of
7:36that is about not being in solitude and doing  isolated work, but actually coming together as  
7:42a group. And if the future of the workplace  is a lot more solo and a lot more isolated,  
7:49I worry a little bit about what this means for the  future of organizations and our ability to create  
7:55cultures and create a sense of identity with the  organization. We've seen other technologies in  
8:02the past deliver to us a future that we didn't quite want. Take, for example, social
8:08media, which promised to create a more connected world but instead
8:13gave us a possibly more fragmented, polarized world, where we perhaps expect less from
8:19each other and, as an MIT ethnographer once said, we are "alone together" through these tools.
8:27We don't want this future for the workplace, and we need to think about ways we can
8:31bring people together through perhaps different means and different approaches, so we can
8:37continue to create thriving environments for people as they engage with these tools.