Webinar Recording
AI Power Users: New Research on AI Coaching and Performance from Analog Devices
Most organizations measure AI coaching success by one metric: did people log in? That's like measuring a gym membership by badge swipes. It tells you nothing about who got stronger.
Analog Devices took a different approach. Working with former Google people analytics leader Prasad Setty, ADI studied 45,000+ AI coaching sessions over 15 months to understand what meaningful engagement actually looks like — and whether it connects to real performance outcomes.
In this session, you'll learn:
- How ADI embedded AI coaching into performance reviews, feedback, and daily work across 10,000+ employees in 30+ languages
- What the Power User Index measures — and why volume, breadth of use, and conversation quality matter more than adoption rates
- The research finding: power users are 31% more likely to move up a performance band
- Why managers are the single biggest lever for driving team-wide AI coaching adoption (the 2x effect)
Your speakers: Jennifer Carpenter is the Global Head of Talent at Analog Devices, where she led one of the largest enterprise AI coaching deployments in the world. Prasad Setty is the former VP of People Analytics at Google who led Project Aristotle; he now leads research at Valence.
Jennifer Carpenter
VP, Global Head of Talent

Prasad Setty
Founding VP, People Analytics
Key Takeaways
- Power users move up performance bands 31% more often. Research at Analog Devices covering 11,000+ employees and 64,000+ coaching sessions across two performance cycles shows employees who use Nadia consistently, across varied topics, with active cognitive engagement are 31% more likely to advance a performance band than nonusers.
- Managers drive AI adoption across their teams. Managers at ADI are 2.98 times more likely to be power users than individual contributors. Teams whose managers actively use Nadia are twice as likely to engage with it themselves, making manager enablement the primary lever for AI adoption at scale.
- Power usage combines three dimensions. Session density (consistent return over time), breadth of use cases (performance, feedback, goal setting, team conflicts), and cognitive engagement together define a power user. All three dimensions matter for performance outcomes, and frequency alone misses most of the signal.
- Equitable rollout design shapes who benefits long-term. ADI built its initial Nadia pilot with deliberate 50/50 male/female representation after recognizing that a manager-only rollout would have skewed access. Fifteen months later, women at ADI use AI more than external global research predicts, running counter to the broader trend of women using AI less than men.
- Nadia and Copilot serve different work in the enterprise AI stack. ADI managers gravitate to Nadia for people work (coaching, performance conversations, team coordination) and to Copilot for individual productivity tasks. Enterprise AI is a multi-tool stack, and purpose-built coaching tools complement general-purpose LLMs.
- Three design levers create power users in the first 30 days. Introduce Nadia with a complex leadership challenge, guide users to stay in the conversation until they leave with a clear next step or artifact, and encourage a return visit within the first five days. These early design choices shape long-term usage patterns more than individual personality.
Full Session Transcript
Rolling Out AI Coaching at ADI: An 18-Month Journey
Ellie Wildman: Welcome all. Thank you all. I see we have some great comments in the chat already. As we're continuing the discussion today with Jennifer and Prasad, this is meant to be a discussion for everyone. Post questions in the chat, thoughts, reflections. As we're getting started, if everyone can post where they're calling in from today. I'm always impressed where everyone's calling in from. Jennifer, Prasad, where are you calling in from?
Jennifer Carpenter: I am calling in from my living room here in Ridgefield, Connecticut outside New York.
Prasad Setty: And I'm in the Bay Area, and it's just stopped raining out here, so it's a nice sunny day.
Ellie Wildman: Fantastic. And I see Hein is calling in from the Caribbean. I am typically based out of Boston, but calling in from the San Francisco area today. We have some other California, Houston, Germany, Chicago, Ireland, Boston. For those of you that I don't know, I'm Ellie Wildman. I'm a director here at Valence, and I work with our clients every day to think about how they are bringing AI coaching into their organizations and how to actually create that behavior change.
Ellie Wildman: One of the reasons I'm excited and interested in the conversation today is that we're continuing to hear over and over again this question of how do I actually measure AI outcomes, both for AI more broadly and for AI coaching. It's been a pleasure to work with Jennifer and Prasad, who are incredible thought leaders in the space. Jennifer is calling in from the Chief Talent Office at ADI. She has a deep statistical background, and she's one of the most data-driven leaders that we work with. Prasad Setty is an absolute legend in the space, from Project Aristotle to Project Oxygen. Jennifer, Prasad, do you both want to do a quick intro?
Jennifer Carpenter: You did such a nice job, Ellie, thank you. It's really a pleasure to work with you and Prasad on this research. It's been such a treat, and I really appreciate the partnership helping us all collectively figure out how to observe employee behaviors, how we support employees better, particularly in the rapid changes we're all experiencing with AI.
Prasad Setty: Thank you, Ellie. It's great to see what ADI and Jennifer are up to. The data scientists at Valence are amazing, so it's been a pleasure for me to continue contributing to the field of people analytics and seeing new sources of data and new types of tools and engagement. Excited to share this story and more in the future.
Ellie Wildman: Before we dive into the ADI story and some of the findings, I'll go through a high-level overview of what we'll talk through today. First, we'll hear a little bit more about how Nadia power users are 31% more likely to move up in performance bands, which is powerful as we're thinking about impact at scale. Then we'll talk about managers specifically as a group and audience, and how they're thinking about utilizing AI. We found that managers are 2.98 times more likely to be power users. Then we'll dive into what that means for their teams. We understand why managers are using AI so much, but what does that actually mean for their teams more broadly?
Ellie Wildman: A huge topic that has been coming up is Nadia and Copilot. What is the interaction between the two? Does using one AI agent help you develop trust and build more usage in the other? Jennifer has done some great thinking here. Then we'll close by thinking about how we develop power users. How do we drive adoption? What does that look like for deployment? How do you think about the first 30 days of a user's story and engagement? Jennifer, do you want to talk more about the ADI story to kick us off?
What AI Coaching Looks Like Inside an Enterprise
AI coaching uses purpose-built AI products to scale personalized development support to every employee in an organization. Unlike general-purpose large language models, an AI coach like Nadia remembers past conversations, understands team dynamics, learns company culture and internal frameworks, and supports people work specifically: performance reviews, difficult feedback, goal setting, leadership challenges, and team coordination. At Analog Devices, 65% of employees and 77% of managers actively engage with Nadia, Valence's AI coach.
Jennifer Carpenter: Let's rewind the clock about 18 months ago. It feels like just yesterday, but in so many conversations about AI, that feels like ages ago. We were beginning to introduce AI at ADI and exploring different partners and different avenues. We wanted to first start with a very people-centered approach: how do we help people? When we discovered AI coaching, Nadia specifically, we thought this would be a low-risk use case that applies well to the way we work.
Jennifer Carpenter: We work across the globe. This tool and many others provide multi-language support. As a talent leader for many years, it's an interesting way of supporting our people when you can provide a product in their native language. That's a brilliant advantage that AI gives us no matter what product you might be using.
Jennifer Carpenter: ADI has grown a lot in the last couple of years, and we grew through acquisitions. We had multiple cultures integrating all at the same time. We were hiring a number of people, so we were onboarding at scale, and we had managers managing teams across time zones. This probably sounds like a lot of the same challenges you all are facing.
Jennifer Carpenter: We wanted to experiment with AI coaching to support employees around time zones in a way that invited them to engage with AI in a low-risk use case. This is here to help you with leadership development, with supporting your teams, with interpersonal conflicts, with your own personal productivity. Just give it a try. See if you like it.
Jennifer Carpenter: We started with an invite-only approach, really inviting people to give it a shot, and we were really surprised by the rapid uptake. So far, we've had about 65% of our employees actively engage with Nadia across these various geographies and countries. 77% of our managers have. We've begun to customize this to be a support across multiple talent practices at ADI: from performance reviews to engagement surveys to team coaching. We're seeing this as augmenting our existing talent processes and giving us a bit of a boost to support our employees.
Measuring Performance Review Impact and Employee Sentiment
Jennifer Carpenter: Most recently, we have embedded AI coaching, Nadia, into our performance review process, a process we all know and love. As we've gone through our performance review process within our company, we provide employees an opportunity to give their self-input, and then managers write the reviews considering their self-input.
Jennifer Carpenter: Ellie mentioned, I love my data. The team and I pulsed sentiment measures throughout the experience, on how people were feeling from the beginning to the end of the performance review. A notable finding came out: when employees engaged with Nadia, they had a significantly more positive experience than those that didn't use Nadia at all. Across all of our sentiment measures, about 80% of Nadia users responded agree or strongly agree, a significantly higher favorable rate than among those who did not use Nadia.
Jennifer Carpenter: For managers specifically using Nadia to write reviews: 85% agreed and strongly agreed that it saved them time. Equally importantly, a similar percentage said it helped them improve the quality. One thing Prasad and I are continuing to research is whether the employees receiving these evaluations thought the same thing. Did they see an improvement in the quality of the evaluations they received from their managers? We'll continue to build on our research to also study whether the employees themselves see equally strong benefits.
Ellie Wildman: I love that, Jennifer. You were intentional about using performance management as a moment to test Nadia, because you're not just experimenting on performance reviews; you're able to see performance more broadly. It'll be interesting to see the implications not just for the individual but for the team as well.
Jennifer Carpenter: Before we move on, I want to highlight for the talent leaders in the audience: pick your moment. We knew we wanted to help employees with a painful administrative burden of writing their self-input. Nobody likes to do it. It's hard. It creates anxiety. AI was drafted into that moment, and employees overwhelmingly told us it improved the quality of their experience. It reduced their anxiety. It reduced their stress. It helped them do this more quickly and better. I would challenge us all in the audience to think about what are those moments that matter, as Ellie said, and to reduce some of that mental load. That's what I think AI is really good at.
Designing a Natural Experiment to Measure AI Impact
Ellie Wildman: It's a great call, Jennifer. This idea of identifying what the pain points are for employees, and how we make sure there's support in the moment. Prasad, do you want to talk more about the research itself, how we thought about designing it, and the project overall?
Prasad Setty: As Jennifer shared, one of the key things I see at ADI is this thirst for information and analytics right from the beginning, just as soon as they started implementing this rollout of Nadia. Whenever you roll out a new product, you go through three different layers of an analytics journey.
Prasad Setty: The first step is adoption. Are people actually finding this new product, this new tool? Are they signing up? Are they logging in? That's the first thing you do, and if you don't solve that, then everything else becomes moot. Once you get past adoption, you're thinking about trust and engagement. Are people reusing the tool? Are they coming back? Do they trust their interactions with the product? Are they telling their peers? All of that signals something that goes beyond adoption, because raw adoption numbers can hide what's really happening underneath. With Jennifer and ADI, we see a very strong sense of how they went through that adoption phase and then the trust and engagement phase.
Prasad Setty: That gets us to the exciting part: today we can really talk about that impact layer. Impact is where all of you are trying to make sense of whether the investment you're putting into tools like Nadia is paying off. When it comes to measuring impact, there's a key question we all have to ask, this notion of isolation: can you trace the impact back to that particular product or tool? That is a really hard challenge. The gold standard is a randomized controlled trial; this is what they do for new pharmaceuticals. But in a real organization, that's quite a luxury. Business momentum requires that you put the tool to use quickly. When you're trying to make sure that everyone in the organization can adopt and benefit from these new tools, you don't want to tell people they won't be able to use it for six months. You're not going to be able to create those kinds of holdout samples.
Prasad Setty: What we have in this case with ADI is what we would refer to as a natural experiment. Three things differentiate this from just a typical adoption study or a cross-functional time study. One aspect is that we basically have a cohort that is in between two separate time periods that we have performance ratings for. ADI, just like everyone else, has a performance cycle. They had a measure of the performance reviews and the performance ratings of people at two different time periods, in '24 and '25. We are limiting this cohort for this analysis to people who first signed up to Nadia in between these two performance periods. That way we are seeing that the effect is due to the introduction of a new tool.
Prasad Setty: The second part is that we're looking at people's performance ratings across these two time periods, so everyone has their own baseline, and we're able to measure their change from that baseline. The third part is that we have people who have self-selected into different types of usage, this notion of power usage that we are going to define for you. We have people with different types of interactions with Nadia, and it allows us to ask: when the magnitude and depth of the conversations you have with Nadia change, how does that influence performance? Broadly, we have more than 11,000 employees in the study, more than 64,000 coaching sessions, and the research has spanned multiple time periods.
The Three-Dimensional Power User Framework
What Defines an AI Coaching Power User
A power user of AI coaching engages across three dimensions: session density (consistent return over time, not just heavy short-term usage), breadth of use cases (performance reviews, feedback, goal setting, strategic planning, team conflicts), and cognitive engagement (actively thinking and framing rather than passively accepting AI output). At Analog Devices, power users averaged six times more conversations with Nadia, 14 times longer engagement, and twice the topic variety of casual users.
Prasad Setty: One phase you might start off with is looking at who is using it more frequently. As we were discussing what a power user should be, we said it shouldn't just be limited to the frequency of usage. We don't want to believe that heavy users are power users. Usage frequency is one input, but the session density measure that we think about also looks at consistency over a period of time. For example, think about someone who might have 20 different interactions with Nadia in one month and then they don't sign up again, versus someone who comes in five times a month but consistently does so for the entire year. Very different types of interactions. In the session density measure, we rate the latter usage more favorably.
Prasad Setty: The second dimension is the breadth of use cases. We are looking at how many different ways people interact with Nadia, how many different types of topics they are engaging in. Jennifer mentioned some of these already: performance reviews and performance ratings, feedback, goal setting, strategic planning, team conflicts. There are lots of different topics. The more people bring in, the more we would consider them to be a power user.
Prasad Setty: The third is about the quality of human cognition. We are looking at what the quality of the thinking is that the user is bringing in. We want Nadia to add value into the conversation, but we also want to see how much thinking the user is bringing into the conversation, because that richness is what helps us think through whether someone is a power user or not.
Prasad Setty: Let me give you a feel for what a casual user might look like versus a real power user in our definition. A casual user might have a manager tell them: we have this new tool called Nadia, I want you to give difficult feedback to someone on your team, why don't you work with Nadia on this and help deliver that feedback. The person might do that and forget about it. Six months later it's time for another performance cycle, and they go back to Nadia for that. That is what we would call a casual user.
Prasad Setty: Imagine someone with a very different interaction model. On Monday, they say: I have to let this senior engineer go on my team, and I need to think through how I'm going to do this. But first, I'm going to talk to my own manager about whether I'm seeing this clearly. Let me prepare for my meeting with my manager. Wednesday comes around, they've had the meeting with their own manager, they've got more feedback, and they discuss that with Nadia. Next week they get the signal they're going to inherit a new team, so they come to Nadia with a conversation on: help me with an onboarding plan for the next 90 days. The week after that, they might come back to that low performance situation again.
Prasad Setty: You can see that the interaction with Nadia is much richer and much broader. When you translate it into the quantitative signals that we have: power users, in our definition, have almost six times the number of conversations with Nadia that a casual user has. They have a much longer lifespan in terms of their interaction with Nadia, more than 14 times. They discuss more than two times the type of topics as a casual user would. That's a framing of how we have thought through this power user index.
Jennifer Carpenter: For the practitioners in the room, this is important. I get the question all the time: how do we measure AI adoption? Understandably, right now for many of us it's just about getting licenses in people's hands and getting them to start experimenting. That's okay. That's step one. But what Prasad's research is really getting at is that three-legged stool. It's about frequency of use, it's about variability, and it's about how much we are going back and forth with the AI to really get the richness. I think this is a concept that is applicable beyond just AI coaching to all different other kinds of tools. If you're being asked how do we measure, thinking about not just frequency but depth and variability and what people are using the tools to do is a really interesting construct to apply not just to AI coaching but to the other AI tools and products you're wrestling with within your company.
Measuring Sentiment While Protecting Employee Privacy
Ellie Wildman: Prasad, we got a few questions around this idea of it being challenging to measure, specifically on that third criteria, the use case variety. Could you talk about how we think about that?
Prasad Setty: As with everything, use case variety can be measured in multiple ways. I won't get into the technical nuances, but we essentially get to take a look at the sessions as they are happening, the themes and the topics, and so on, and can measure it along those dimensions. At Valence, we are able to do that. For many of the tools we see out there, many are coming out and able to provide end users, I should say, companies, with those kinds of insights. The power users that we see in this study are folks that have a rich set of topics that they are engaging on.
Jennifer Carpenter: Just to highlight: if you're using this tool, you will not be able to see what your people are talking to Nadia about, and that's appropriate. We don't want to see. We tell employees this is a trust-based tool: you can have your conversations with Nadia, and we will not be looking. Only the data scientists behind the firewall at Valence are able to look at aggregate sentiment and variability. What I would stress is, whether it's Nadia or other tools, maybe not now but soon, vendors will be providing us as talent practitioners this kind of sentiment analysis, giving better insight into topics and variability as a key measure of organizational signals. Many of us might not have it now, but this future is coming very fast: across various AI tools, sentiment analysis will give us more of a pulse on what's happening within our organizations than we've ever had before. It's an exciting time, and I think we're just scratching the surface with this study.
Ellie Wildman: Really powerful, Jennifer. As we're transitioning, last question in the chat for you: we got a question around how managers are actually using Nadia for the self-review and the manager review. People were wondering if Nadia was offering up feedback and what that looked like in practice.
Jennifer Carpenter: What's very important, as someone who's been in the performance management game a long time, is that Nadia is not evaluating employees on behalf of the manager. Much like having a coach sitting right next to you, you are talking about Prasad's performance and thinking through: how do I give him constructive and helpful feedback? How can Nadia serve as a writing assistant? What should I be thinking about? She gives me that probing, prompting back-and-forth to refine my thinking. Nadia is not taking information and whipping you up your own sandwich and a side of chips. She works with you much like an assistant would, a coaching assistant, to help you craft meaningful and helpful feedback.
Jennifer Carpenter: A lot of our managers are saying: Nadia asked me good questions about this person's performance. I spoke to her naturally, in whatever language I wanted, about what I thought of their performance, and we worked together to craft insightful feedback to help them improve, whether that was to reach stretch goals or to frame feedback in a more constructive way. I really want to strike that important point: with a human in the loop, she's helping improve the quality and the impact of feedback. She's not evaluating or assessing people without the manager's involvement.
The 31% Performance Lift and Why Managers Drive It
The Measured Impact of AI Coaching on Performance
Research at Analog Devices across 11,000+ employees and 64,000+ coaching sessions shows that Nadia power users are 31% more likely to move up a performance band between review cycles. Lower performers showed the largest lift (over two times more likely to advance to "meets expectations"), mid-performers gained roughly 21 to 23%, and high performers were more likely to remain high performers compared to nonusers. The pattern holds across all performance levels.
Jennifer Carpenter: Thanks to the partnership of the data scientists and Prasad, we were able to study more closely the interesting patterns that were emerging with these power users. A reminder: these power users are having six times more sessions with Nadia, for longer periods of time, across a variety of conversations and uses. What's so special about them? What we found is that between these two performance periods, they were more likely to move up a performance band. Not causal, but an emerging finding that these individuals may be getting a performance boost thanks to the support they're getting from AI coaching.
Jennifer Carpenter: We also found that managers were more likely to be power users. Even more importantly, if a manager, as we like to say here at ADI, is casting the right shadow, reflecting positive AI use, the employees on their teams are more likely to be power users. I cannot overstate the point that the way we reflect positive use of AI directly shapes how the people around us respond. As this evidence is showing, if you're using Nadia and you're seeing positive results, your team is also likely to see positive results.
Jennifer Carpenter: We have a five-point rating scale at ADI. For simplified purposes, we collapsed that into three points. When we looked at our lower performers, those that may not be meeting expectations all the time, those that were power users versus those that were not in that rating category: between the time they had Nadia and the time we looked at their performance, they were over two times more likely to move up into the "meets expectations" category. There could be many other reasons why performance moved up, but those power users who are leaning in and actively working through coaching to improve their own performance showed demonstrated change in those ratings.
Jennifer Carpenter: In that mid-range, for people who are already meeting expectations to move up a band, we saw less of a jump: they were about 21 to 23% more likely. We think that may be because when you're already performing really well, moving up incrementally is even harder. But we saw similar patterns still emerging with our power user audience: the people who are leaning in and putting in the work are moving up a rating in this particular study. Anything you would add to that, Prasad?
Prasad Setty: People might ask, Jennifer, what about the high performers? What happened to them? Given how your rating scale goes from low, medium, high, high performers can't go any higher. But what we do see is that those who are high performers in the past and then started using Nadia, many more of them remain high performers compared to those who are nonusers. Across all of these performance levels, we basically see that power users have much better results than those who are not using Nadia.
Jennifer Carpenter: In summary, across both groups, regardless of where you were before, we see Nadia influencing the performance band: power users were about 31% more likely to move up. These are still emerging findings, with lots of studies left to complete, but it's really favorable directional evidence for offering tools like this to our people and encouraging them to be power users, meaning come back and get help on a consistent basis, not just for one use. It's not a one-hit wonder. How is it going to help you across a variety of conversations? We're really teaching our people not to just go in and ask any AI tool for an answer. It's not Google. The way this agent is built, helping you think and asking you the right questions so you go deeper and deeper, seems to promote reflection, form positive performance habits, and produce demonstrable change in how performance is delivered.
Ellie Wildman: The way you specifically picked performance management too is a great use case. You're pairing qualitative data, quantitative data, to look more broadly at what the impact is. You already have the five-point scale to see change over time. I know you were also interested in some other findings. Do you want to walk through what the implications are for managers?
How Managers Drive AI Adoption Across Their Teams
Managers are the catalyst for AI adoption across their teams. At Analog Devices, managers are 2.98 times more likely to be power users of Nadia than individual contributors. When managers actively use AI coaching, their direct reports are twice as likely to use it themselves. Teams whose managers do not engage with AI show lower adoption, making manager enablement the primary lever for AI adoption at scale.
Jennifer Carpenter: All of us are struggling with the same questions. One, do these tools help? Just simply, do they help? Yes. We're finding it's helping save time, improve quality, improve performance in some emerging findings. But then how are we supporting adoption? This is a fraught time. There's some love and hate going on with all AI products. How are we really taking a human-led approach and making sure we're understanding how we drive adoption in a positive way, people-first perspective?
Jennifer Carpenter: What we saw emerge strongly, and I've said this before, it bears repeating: managers are key in driving adoption. I often like to say managers are the catalyst of AI, not the casualty of AI. We sometimes see in the popular press leaders, who shall remain nameless, going after mid-level managers: let's cut managers, we don't need managers in the AI era. I strongly disagree with that view. I think managers are a catalyst, particularly now, and are even more important. What we're finding is that our managers are leaning in even harder than our individual contributors.
Jennifer Carpenter: They need help. AI is helping them. Number one, they're leaning in, they need help, they're more likely to be using these tools. And when managers use Nadia, their teams are twice as likely to use it. Similarly, if they're not, their teams are less likely to use it. I worry that if we're not winning over the hearts and minds of managers in this important time, we're doing a disservice to their teams, because those that are using these tools are getting a head start, and these people are being left behind and maybe even being encouraged to be left behind by the very people they trust to lead them. It's really important for us as HR professionals to win over our managers and make sure they really matter in all of this and equip them to be the leaders we need them to be.
Jennifer Carpenter: We also looked at spans and layers. If you're an HR practitioner, you probably cringe when I say spans and layers. Those with larger teams are using Nadia modestly more than those with smaller teams. We're also exploring that angle and continuing to study it.
Ellie Wildman: As teams get larger and more complex, whether from a headcount standpoint or from more AI tools and change more broadly, what does that mean for the actual manager on the team? These results are quite powerful.
Ensuring Equitable Access to AI Tools from Day One
Designing Equitable AI Rollouts at Work
Equitable rollout design shapes who benefits from AI tools long-term. Analog Devices initially planned to pilot Nadia with managers only, then redesigned the pilot with 50/50 male/female representation. Fifteen months later, women at ADI use Nadia more than external global research predicts, running counter to the broader trend that women typically use AI less than men. Early pilot composition has outsized long-term influence on adoption patterns.
Ellie Wildman: Jennifer, you've done fantastic work thinking about different audiences and how you're approaching deployment to make sure ADI is set up for success. Do you want to talk more about women specifically?
Jennifer Carpenter: If I could shout this from the rooftops, I would. If you are early in experimenting with AI and making these tools available through pilot groups, be extremely mindful of whom you are enabling. When we were initially going to roll out Nadia in a pilot, we were about to make a really critical error: I was only going to roll it out to managers. Then I paused, and some really bright people on my team said: look at the makeup of managers compared to the general population. Let's just say it wasn't 50/50 male/female. Instead, from the early days, we made sure we rolled out AI tools as equitably as possible and made those early pilot groups 50/50 male/female.
Jennifer Carpenter: I don't know if it was that; correlation versus causation is very difficult to untangle here. But what we found, now 15 months later, is that women are using Nadia more than external studies show women using AI generally. Women, as you can see, are represented among our power users at double their rate in the general population, and in general they're using Nadia more than men. We still have to study why that is.
Jennifer Carpenter: Why am I so excited about this finding? Because rigorous, well-constructed global research shows that women use AI less than men, even when you control for all factors, including the job they're in, the country they live in, and their experience level. One thing I would encourage us all to think about: as we roll out these tools, whether Nadia or anything else, how are we ensuring that we democratize access and don't drive systemic patterns that bake in bias long-term?
Jennifer Carpenter: What I was excited to see is that when we constructed our pilots to be equitable, we saw a trend that bucks the external research. At ADI, women are using Nadia, and other AI tools as well, just as much as or more than men. It's a framework I think we all need to keep in mind, because some of our rollout or deployment strategies can have unintended consequences we don't realize.
Ellie Wildman: The intentionality you've brought to deployment, both in launching to managers and individual contributors and in who was in those groups, has had a huge impact on who's engaging with AI at ADI more broadly.
How Nadia and Copilot Serve Different Work
AI Coaching vs General-Purpose LLMs in the Enterprise
AI coaching and general-purpose LLMs serve different work in the enterprise stack. At Analog Devices, managers gravitate to Nadia for people work (coaching conversations, performance reviews, team coordination) and to Copilot for individual productivity tasks. Nadia is built for coordination, planning, and management contexts, understanding team relationships over time. General-purpose tools focus on individual output. Enterprises increasingly use both alongside each other, with tool choice matched to task type.
Ellie Wildman: Switching gears: a huge question we've been getting from leaders, especially in the past four to six months, is: we already have Copilot, so do we need another coaching tool? How are you using this differently? Jennifer, I know you've thought about this and looked into the different usage patterns between Nadia and Copilot at ADI. Could you walk us through what you found?
Jennifer Carpenter: This is important because, remember how I said it's only been 15 months but it feels like ages ago? When we first introduced Nadia, people were asking: what's an agent? The word "agent" hadn't been absorbed into our brains yet. People were saying: is this just like ChatGPT? What's the real difference? Now, nearly 18 months later, we do use Copilot 365 at our organization; we have that more generic large language model. You might use Gemini, you might use Claude, fill in the blank for what you use at your organizations. The question I was studying was how that compares to how we're using Nadia.
Jennifer Carpenter: When we look at both populations side by side, they're using both tools. The first hypothesis was that as we introduced a large language model, a utility player more broadly, people would somehow stop using another product. We're not seeing that. What that means for all of us on this call is that we're going to be using multiple tools for many things, and we'll be shifting between them. There's a tool for every task, and we're seeing that play out in these early days.
Jennifer Carpenter: We are seeing managers gravitate more toward Nadia for management-type work. Nadia has been designed specifically to help with people work, and it's clearly doing its job. Copilot is more of that utility player, versus Nadia as an agent crafted for this particular work. People gravitate to different tools for different types of work, and that bears out in span of control: people with bigger teams are using Nadia more. For people who have to focus on people coordination, even within HR or project management, and for people leaders, we're seeing quite heavy Nadia use for that kind of coordination work, while we continue to see adoption and ramping on Copilot for work focused on individual productivity and output.
Jennifer Carpenter: We do see people who use both. It's fascinating to me that our top-performing people are consistently our heaviest users. I don't think it's necessarily causal, but regardless of the tool, the heaviest users are the highest-rated people. High performers are finding these tools faster and using them more frequently than anyone else.
Ellie Wildman: Really helpful. Before we move on, Prasad, anything else you would add here?
Prasad Setty: This is a great summary, and the role focus is something I would have people keep in mind. In the previous generation of SaaS tools, the average enterprise used more than 100 of them. Similarly, in the AI era, your tech stack is going to look very different across different people and different roles. For coordination, planning, and management, Nadia is built for those kinds of contexts and understands your relationships really well. That is where you probably want to steer usage toward something like Nadia, and focus the individual output work on Copilot or Gemini.
The First 30 Days: Three Levers for Creating Power Users
Three Design Levers That Create AI Coaching Power Users
Three design levers create AI coaching power users within the first 30 days. First, introduce users with a complex leadership challenge (team management, difficult feedback) rather than a narrow task like calendar help. Second, encourage users to stay in the conversation until they leave with a clear next step or artifact. Third, drive a return visit within the first five days to establish a habit-forming pattern. These design choices matter more than individual personality traits.
Ellie Wildman: The call-out of task versus coaching is helpful. Prasad, given what we've seen about AI coaching power use, a big thing you've been thinking about and researching is: what are the levers that are most valuable in the first 30 days to create power users? This is relevant for this audience as they're thinking about deployment and how to create that behavior change. Do you want to walk us through some of the initial findings?
Prasad Setty: All of these layers build on each other. We said: let's define power users, then see if they have impact. As Jennifer showed, it does look like power usage affects performance reviews and performance ratings over time. The next question is: if power usage is beneficial, how do I get someone to be a power user? One thought you might have is that it's limited: maybe some people are simply cut out to be early adopters of AI, they'll grab on to new tools and be better, and all I need to do is find them. But I think that's a limited view. What we should be asking is: how do we help everyone become a power user? There shouldn't be any cap on the benefit people can get from these tools.
Prasad Setty: That is what this research was focused on. The Valence data scientists looked at this from a few different angles, and here's what we see currently. The interesting thing is that all of these are design choices. They don't necessarily depend on people's personality traits or their performance.
Prasad Setty: First is what that initial interaction is about. When you go into Nadia with an unbounded or much more general problem, like team management or leadership development, you're more likely to end up a power user than someone who goes in with a much more bounded problem: help me with my calendar, help me with goal setting. Those are finite tasks. They're all important, but one type of interaction seems to lead toward more power usage, and that's the more complex problems. If you're deploying Nadia and your initial communications say you can save a little time on your performance write-ups by using Nadia, you will get some usage, but it might turn out to be more of that casual usage because it's limited in scope. If instead you say something like: bring Nadia the hardest leadership challenge you're dealing with, that might lead to a different type of interaction over time.
Prasad Setty: The second is: at the end of that initial conversation, do you walk out with a clear next step or an artifact? You might say some of this depends on Valence and the Nadia tool they're building, and absolutely there's a lot of work here for the product people at Valence. But what you can do, if you're deploying Nadia, is encourage your people to stay with Nadia until they get to that step. Think about the interplay with what I said earlier: you want to give Nadia your toughest challenge, but those are the problems where it's harder to end up with a clear next step or artifact. You have to stay with Nadia longer to get there.
Prasad Setty: The third thing we found is really about the next visit. We start with an ongoing conversation about a difficult challenge and end with a clear next step. The question is: when does the second interaction with Nadia happen? On the x-axis is the number of days between session one and session two, and on the y-axis is the percentage of people who become power users. The average rate at which people become power users is roughly 35%. If people come back for that second visit within the first five days, they seem to get into a habit-forming pattern that enables more of them to become power users.
Prasad Setty: These are the three things to think through. There's work here for the product people at Valence, but there's also work for those of you thinking about communication and sponsorship: encourage people to go in with a big, juicy problem, to stick with Nadia until they get clear next steps and artifacts, and to come back for that second visit within the first few days.
Jennifer Carpenter: How we brought that to life at ADI (and I want to give a shout-out to Perley and Shanae on my team, who I see are listening in) is by working with the Valence team. We have monthly sessions where, I joke, it's like trying to keep the birthday balloon up in the air: each month we bring meaty things that Nadia can help people with. It's also teaching people that this kind of assistance has never been available to them before. They don't know they can get coaching on their engagement survey, with their own results and their own coach, and develop a plan to improve engagement on their team. They don't know Nadia can help them with that, or with revisiting their goals, or having a tough conversation with their boss, or preparing to be promoted, or whatever the case might be.
Jennifer Carpenter: One thing to think about to get to the outcomes Prasad was describing: consider what's going on in your own organization at the time, offer a moment of reflection, and invite people to come learn how this tool can help them. We've seen really great engagement by keeping the conversation going, because people are also having to rewire how they work. They've never had this type of assistance before, so it's about reminding them and finding new, novel, calendar-relevant ways for them to discover its usefulness.
Ellie Wildman: That's powerful. To both of your points, it's a combination of things. On our end, from the product side, we're trying to understand what the best first interaction looks like and tailor Nadia for it depending on the audience, alongside the behavior change that goes into deployment. Jennifer, anything else you would add about how to build a workforce of actual AI power users from a deployment standpoint?
Jennifer Carpenter: Like all of you, we are still figuring it out. I would say: be patient. Really stop and think about what questions you're seeking answers to. We have so many questions and not nearly enough answers, but we've tried to be mindful about what we want to know more about to help our people, because that's our job. That's how we've been studying it. I would recommend finding partners, like Valence and others, who will help you think through how to get answers to the questions you need to be smarter about, to help your organizations through this time of change.
Jennifer Carpenter: I was a math major undergrad, so I lean into the data. But you can find just as much insight by talking with your people. Sometimes getting out of our offices and talking to the people we serve is really important and useful. Measuring impact matters, but it doesn't have to mean statistical analysis. It can also mean getting out, listening, and having your own listening strategies for how these tools are being used and, most importantly, whether they're useful for your people. We're finding great value in this particular product, but I've been guilty in the past of rolling something out, thinking I'd rung the bell, and finding no one considered it that helpful. I would encourage us all: don't make the same mistakes I've made. Measure early and often, and listen to your people to make sure the products we put out there are helping them.
What's Next: Cognitive Quality and the Future of AI Coaching Research
Ellie Wildman: A lot of people are asking themselves: how do I measure? Hopefully some of the aspects you all laid out today will help people answer that question in their mind. As we're thinking about audience Q&A, if anyone has any questions, please put them in the chat. I'll continue to monitor it. Prasad, do you want to talk a little bit about what's next? I know you have some exciting questions top of mind.
Prasad Setty: As Jennifer said, the number of questions we have is unbounded, for sure. We now have a usage and interaction model that we can mine for insights helpful to every one of Valence's users. One conversation in the AI and human-usage research community is about what's called cognitive offloading: as these AI tools become more prevalent, are people giving up their own thinking? Cognitive offloading itself can be valuable, but cognitive surrender is not, and that's what we want to avoid. One of my views is that every AI interaction either helps with development or leads to dependency, and there's a design choice around how we ensure it yields more development.
Prasad Setty: In that context, the Valence data scientists are thinking about this notion of cognitive engagement: the quality of cognition people bring into these interactions with Nadia. We're trying to be methodical about measuring how well people are thinking, how well they're framing, and how well they're responding to AI. Are they quick to accept what AI offers them? Are they challenging it? All of those measures roll up into an index that lets us evaluate cognitive quality. Beyond that, we're looking at whether that cognitive quality can be enhanced through interaction with Nadia, and whether it transfers to other environments, where you're not just talking to AI but talking to humans as well. Lots of exciting stuff coming up, so stay tuned for future conversations where we can unpack that.
Ellie Wildman: Absolutely. To your point, Prasad: more questions than answers, but all exciting questions that we can all work through together. Being mindful of time, if you all have questions, please post them in the chat. We can follow up after this. In terms of what's coming next, we have a few additional webinars coming. Nadia 2.0 on April 29. On May 7, AI Coaching for the Frontlines, which is near and dear to my heart. This is a fantastic way to scale coaching to people that typically don't have access to it, and what that coaching looks like is actually quite different. On May 21, What Makes a High-Performing Manager, with Hein, who I believe is on the call today calling in from the Caribbean. Feel free to sign up for all of these. Thank you so much, Jennifer and Prasad, for not only the time today but the amazing work you've been putting in and the thought leadership here. Thank you, everyone, and appreciate your time.