In this episode of the Knowledge Base Ninjas podcast, Netra shares how AI is transforming content creation and evaluation in technical writing. She explains the concept of an evaluation-first content lifecycle, where content is tested and validated before it goes live, applying a CI/CD-like approach to documentation. Netra discusses key metrics for content quality, retrieval, and impact, and how AI can help scale evaluation. She also talks about the evolving role of technical writers, the importance of error analysis, and how partnering with engineering and ML teams can improve content performance. Finally, she emphasizes the need for writers to experiment with AI and shares an example of a well-written document that an LLM failed to retrieve correctly, along with the takeaways from it.
Watch the full podcast episode video here
You can listen to the full episode on Apple, Spotify and YouTube.
About Netra
-
Netra’s LinkedIn
-
Netra Pawar is a Lead Documentation Engineer at Meta with over 14 years of experience in tech writing. She began her career after college at Mindtree, initially working as a developer, where she was inspired by the tech writer on her team and the strong documentation culture. With a natural interest in language and writing, she found tech writing to be a perfect fit and has been passionate about building and improving product documentation ever since.
Quick jumps to what’s covered:
2:52 – What “Evaluation-First” Means in AI-Driven Content Creation
4:18 – Key Metrics and Frameworks for Evaluating Content
6:36 – Using AI to Simulate and Stress-Test Documentation
7:39 – Risks of Over-Reliance on AI in Content Evaluation
8:55 – The Future Role of AI in Content Evaluation
Transcript:
-
01:13 – 02:52 Netra’s Journey into Documentation Engineering
Gowri Ramkumar: Good day, everyone. Our guest today is Netra Pawar, Lead Documentation Engineer at Meta.
Hi, Netra. How are you doing today?
Netra Pawar: Hey, Gowri. I’m doing very well. And thank you so much for having me. I’m really looking forward to this conversation.
I love that we are talking about something so timely, which is how AI is changing the way we think about docs and content quality in general.
So, excited to be here.
Gowri Ramkumar: Fantastic. Yeah, I know you’ve got ample experience, is it 14 years or so?
Netra Pawar: I think I’ve stopped counting. But, yeah, I’m sure it’s somewhere there. Yeah.
Gowri Ramkumar: Alright. So what was the trigger? Tell me a little bit about your background. How did it all start, and how are you enjoying the journey so far?
Netra Pawar: I think, like most of us in the tech writing world, we chose this career by accident, or got introduced to it after having done something else first. My journey began along similar lines.
I started fresh out of college as a campus graduate with a company called Mindtree, where I briefly worked as a developer. That’s where I was introduced to this role in the first place. I was just in awe of the tech writer on my team. I think we were really lucky to have a strong doc culture in the product I was part of, and I loved how she had a holistic view of everything, about the product and the technology in general.
I also had a natural inclination towards language and writing, so it felt like a great fit. That’s how I got introduced to the role and started my journey from there.
Gowri Ramkumar: Nice. Very nice.
-
02:52 – 04:18 The Evaluation-First Approach to AI-Driven Content
Gowri Ramkumar: Now, you work for Meta. What does an evaluation-first content lifecycle mean in the context of AI-driven marketing, or content creation in general?
Netra Pawar: Right.
So, the way we’ve been thinking about it is, when we say evaluation-first, we mean that we start designing content for intent fulfilment and retrievability from the outset, right. From day one, we start thinking about these concepts, making sure the content we’ve written really answers everything we set out to author, and ensuring that our AI assistants are actually retrieving it.
Because I think the way people consume content is also changing. Especially within my own org, we have in-house assistants, and these AI assistants are often the first surface users go to. So instead of publishing first and measuring later, we test content before it goes live. We use synthetic user intents that mirror how people actually search or ask questions, and then we validate that with real user queries from places like our assistants or workplace bots. It’s essentially the CI/CD mindset applied to docs: measure early and continuously, learn, and iterate, so the content gets smarter every time it’s used.
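To make the CI/CD analogy concrete, here is a minimal sketch of what an evaluation-first gate could look like before docs are published. The corpus, synthetic intents, word-overlap retriever, and threshold are all illustrative stand-ins, not a description of Meta’s actual pipeline.

```python
# Minimal sketch of an "evaluation-first" gate for docs, run before publishing.
# The corpus, synthetic intents, and word-overlap retriever are illustrative
# stand-ins for a real retrieval stack.

DOCS = {
    "sso-setup": "How to configure single sign-on (SSO) for your workspace.",
    "api-tokens": "Create and rotate API tokens from the developer settings page.",
}

# Each synthetic intent mirrors how a user might phrase a question,
# plus the doc we expect the assistant to retrieve for it.
SYNTHETIC_INTENTS = [
    {"query": "how do I set up single sign-on", "expected_doc": "sso-setup"},
    {"query": "rotate an expired api token", "expected_doc": "api-tokens"},
]

def retrieve(query: str, k: int = 3) -> list[str]:
    """Toy retriever: rank docs by word overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(DOCS, key=lambda d: -len(q_words & set(DOCS[d].lower().split())))
    return ranked[:k]

def evaluation_gate(threshold: float = 0.9) -> None:
    hits = sum(i["expected_doc"] in retrieve(i["query"]) for i in SYNTHETIC_INTENTS)
    coverage = hits / len(SYNTHETIC_INTENTS)
    # Fail the publish step if retrieval coverage drops below the bar.
    assert coverage >= threshold, f"coverage {coverage:.0%} is below {threshold:.0%}"

if __name__ == "__main__":
    evaluation_gate()
    print("Docs pass the evaluation-first gate.")
```

A real setup would swap the toy retriever for the team’s actual search or RAG stack and run a check like this on every docs change, the same way unit tests run on every code change.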
Gowri Ramkumar: Right.
-
04:18 – 05:47 The 3 Layers of Content Evaluation
Gowri Ramkumar: What kind of metrics do you measure? Or let me ask it this way: what metrics or frameworks should teams use to evaluate content?
Netra Pawar: I think yeah, that’s a tough one, right?
We’ve kind of learned these through experience over the past few months. We’re looking at three layers of evaluation and metrics, right.
First, we look at content quality, because as everyone knows, that’s more important now than ever, so we focus heavily on it. Is the content accurate and complete?
Second, we look at retrieval quality, which is basically: can the AI or the search system find the answer when a user asks a question? That’s where metrics like recall and MRR (mean reciprocal rank) come into play, and those are the ones we piggyback on.
The third layer is impact. Because I work on enterprise products, we look at whether our content is helping to reduce support tickets or helping users finish their tasks faster.
The key is to mix human judgment and programmatic checks: humans test for usefulness and clarity, while automated tests look at coverage and retrieval, and together they tell the full story.
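Recall and MRR are standard retrieval metrics, and a small sketch shows how they might be computed over a set of queries with known answer documents. The ranked results and relevant-doc labels below are made up purely for illustration.

```python
# Illustrative computation of two standard retrieval metrics:
# recall@k (did the relevant doc appear in the top k results?) and
# MRR (mean reciprocal rank of the first relevant result).

def recall_at_k(results: list[list[str]], relevant: list[str], k: int = 5) -> float:
    hits = sum(rel in ranked[:k] for ranked, rel in zip(results, relevant))
    return hits / len(relevant)

def mean_reciprocal_rank(results: list[list[str]], relevant: list[str]) -> float:
    total = 0.0
    for ranked, rel in zip(results, relevant):
        if rel in ranked:
            total += 1.0 / (ranked.index(rel) + 1)  # rank is 1-based
    return total / len(relevant)

# Example: ranked doc IDs returned for three user queries, plus the doc that
# actually answers each query (hypothetical data).
ranked_results = [["sso-setup", "api-tokens"], ["faq", "api-tokens"], ["faq", "billing"]]
relevant_docs = ["sso-setup", "api-tokens", "sso-setup"]

print(recall_at_k(ranked_results, relevant_docs, k=2))      # 2/3 ≈ 0.67
print(mean_reciprocal_rank(ranked_results, relevant_docs))  # (1 + 1/2 + 0)/3 = 0.5
```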
Gowri Ramkumar: Understand.
-
05:47 – 07:35 How LLMs Are Transforming Documentation Workflows
Gowri Ramkumar: Now, how do you see large language models like GPT-5 changing how teams approach content quality? You spoke about what kind of content is getting retrieved, and how you constantly monitor that and feed it back, right? So do you think it’s changing now?
Netra Pawar: I think, when we talk about these frontier models, you can get as creative as possible with playing around with prompts and roles that you assign to LLMs, right?
So we’re heavily focused on running all sorts of iterations and experiments to see how we can use AI to create content for us, and how we can use it for quality checks, like I touched on at the start.
And when you get into editing, reviews can turn into simulations, right? Instead of having one author review a page, you can scale up with AI models to simulate thousands of questions through golden datasets and see if your content really holds up. You can stress-test your documentation now, spot gaps early, and see where steps are unclear, where chunking is poor, or where examples are missing.
So I feel like our role is shifting, and we’re becoming more like curators and evaluation architects. The bottom line is that we’re telling AI what good looks like, right? And guiding how feedback loops should work across the whole doc system. I think that’s the major way I see AI becoming part of our tech writing workflows.
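As a rough sketch of what "simulating questions against a page" might look like, the snippet below runs a tiny golden dataset through a stubbed model call and reports which questions the page fails to answer. The golden dataset, doc text, and ask_llm placeholder are hypothetical; a real version would call the team’s model of choice.

```python
# Sketch of stress-testing a doc page against a "golden dataset" of questions.
# ask_llm() stands in for whatever model/assistant API a team uses; it is
# stubbed here so the sketch stays self-contained and runnable.

GOLDEN_DATASET = [
    {"question": "Where do I rotate an API token?", "must_mention": "developer settings"},
    {"question": "Can I rotate a token without downtime?", "must_mention": "grace period"},
]

DOC_PAGE = """Create and rotate API tokens from the developer settings page.
Old tokens keep working during a 24-hour grace period."""

def ask_llm(prompt: str) -> str:
    # Stand-in for a real model call; returns the doc text so the sketch runs end to end.
    return DOC_PAGE

def stress_test(doc: str) -> list[str]:
    """Return the questions the doc appears not to answer."""
    gaps = []
    for item in GOLDEN_DATASET:
        answer = ask_llm(f"Answer using only this doc:\n{doc}\n\nQ: {item['question']}")
        if item["must_mention"].lower() not in answer.lower():
            gaps.append(item["question"])
    return gaps

print(stress_test(DOC_PAGE) or "No gaps found in this golden set.")
```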
-
07:35 – 08:49 Risks of Relying Too Much on AI
Gowri Ramkumar: Not many people talk about this, right? What risks, if any, are there in relying too heavily on AI for content evaluation?
Netra Pawar: I think that’s a great question. Right. So, 100% yes, right?
So the biggest risk is false confidence, because these models are essentially trained on the content that’s already out there, right? And that content is not perfect. We know the models will carry forward whatever inaccuracies it has, because they’re not trained on perfect content.
And I think the scary part about AI is that it can sound very convincing and very confident, which is exactly why false confidence is the biggest risk. That’s essentially why we always keep a human in the loop. We take a two-pronged approach to evals: we start with domain experts and tech writers performing human-based evals, but since that can’t scale, we also rely on programmatic evals to scale it out.
So basically, you always have to have a human in the loop, but AI gives you the speed and the scale.
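One way to picture the two-pronged setup is cheap programmatic checks running on every page, with only flagged pages routed to human reviewers. The rubric below (required sections, an empty-link check, a length bound) is invented for illustration; any real team would tune its own checks.

```python
# Sketch of the two-pronged idea: cheap programmatic checks run on every page,
# and only pages that fail get routed to human reviewers.

import re

REQUIRED_SECTIONS = ("Overview", "Steps", "Troubleshooting")

def programmatic_checks(body: str) -> list[str]:
    """Return a list of issues a machine can catch without human judgment."""
    issues = []
    for section in REQUIRED_SECTIONS:
        if section.lower() not in body.lower():
            issues.append(f"missing section: {section}")
    if re.search(r"\]\(\s*\)", body):  # markdown link with an empty target
        issues.append("empty link target")
    if len(body.split()) < 50:
        issues.append("page may be too thin to answer real questions")
    return issues

def triage(pages: dict[str, str]) -> dict[str, list[str]]:
    """Return only the pages that need a human eval, with the reasons why."""
    return {title: issues for title, body in pages.items()
            if (issues := programmatic_checks(body))}

pages = {"Rotate API tokens": "Overview: ... Steps: ... Troubleshooting: ..."}
print(triage(pages))  # the short example page gets flagged for a human to look at
```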
Gowri Ramkumar: Yeah. That’s absolutely, very well said.
Netra Pawar: Yeah.
-
08:49 – 10:17 The Future of AI in Content Evaluation
Gowri Ramkumar: So we started with AI as the theme of this conversation. Now, how do you see AI’s role evolving in content evaluation over the next 3 to 5 years?
Netra Pawar: I feel this is the one technology that has really surprised me. With the pace it’s moving at, we can’t really predict everything it will be able to do. But the way I see it personally, with all these agents being spun up literally every single day, I feel these agents will be able to perform a lot more autonomous tasks on our content.
For example, you could have a runbook of sorts, or a user guide with procedures, and then deploy these agents to go look at your content and provision something autonomously, right? And then the push-pull paradigm comes into play: you could actually have AI push alerts to you.
For example, saying this page might confuse users because it lacks X, Y, Z, or it’s missing this context. Maybe I’m being super optimistic, but being able to do a lot of autonomous things is what I foresee in the future.
Gowri Ramkumar: Okay, okay. That’s great.
-
10:17 – 12:27 ⚡ Rapid-Fire Round
Gowri Ramkumar: So let’s move on to the Rapid-Fire Round.
So any knowledge base related resources you have recently consumed?
Netra Pawar: Well, I just follow a lot of folks. We have some great tech influencers on LinkedIn, so I follow a lot of people in the AI space.
Some of my favorites for evaluations: I follow Hamel Husain and Shreya.
And just within our own org we have such amazing talent, and I’m learning every single day from my colleagues. So there isn’t one particular resource per se.
And then obviously there are the well-known blogs like Write the Docs, and our own tech writers in the industry that we look at.
But yeah, I think those are some of the influencers I really follow.
Gowri Ramkumar: Okay, okay. There is AI as well.
Now, one word that comes to your mind when you hear “Documentation”?
Netra Pawar: I’m realizing I’m so bad at these rapid-fire questions.
I feel like it’s just enabling, right?
At the end of the day, we’re enabling, we’re solving a human problem. So, yeah.
Gowri Ramkumar: Enabling. Yeah.
A piece of advice you would give to your 20-year-old self?
Netra Pawar: Wow.
I would say: stay open to learning a lot of things and to experimenting.
Actually, for example, in my 20s, I think I would go back and thank myself for picking this career, to be honest.
I mean, obviously, not everybody who started off as a developer would want to start writing. So I would go back and really thank myself, because I feel it’s a very rewarding profile.
So that, and then being open to learning and networking, to iterating and experimenting. And just have fun with it.
And I think a lot of our job also allows us to like, interview and just meet so many new people.
So I think, just. Yeah, just enjoying the role. Yeah.
Gowri Ramkumar: Okay.
-
12:27 – 15:21 Closing Thoughts: Embracing AI and Experimentation
Gowri Ramkumar: Now, I hope we were able to talk a little bit about what you wanted to cover, but if there is anything else to add, please feel free to add now.
Netra Pawar: I think we pretty much covered everything on the topic we had. But in general, I see a lot of hesitance and skepticism about AI in the tech writing world.
My personal belief is that we’ve got to embrace it; it’s here to stay. Just experiment with what works for you and train yourself on how to prompt these AIs, because there’s no one perfect way, no playbook that tells you, hey, this is the only way you’re going to get a result, right?
That’s exactly what we do every day. You just have to experiment, and eventually you’ll start seeing patterns in why something worked for you. Then you’ve got to be vocal and share that more widely with your teams: share your successes, what worked and also what didn’t work. That’s also important, saying, hey, I’ve tried this a bunch of times and it doesn’t work because of XYZ. And go back and perform error analysis; if your use case is conversational AI, say, go back and see why something wasn’t retrieved.
I can give you a great example from my own org. We had a really well-written document, and we knew the answer to the question was in the document, but for some reason the LLM just wasn’t pulling it. So we went back and spoke to the teams who own the RAG system, and we saw that the page was written with some custom components that the RAG pipeline wasn’t able to pull in, right?
So we realized, okay, this is not the way we should be writing in some cases. I’m not saying write entirely for machines; you still have to write for humans at the end of the day. But do this error analysis to see why something doesn’t work, rather than taking it at face value. Start questioning: why is it not working when I have everything in my doc?
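For readers who want to try this kind of error analysis themselves, here is a toy sketch of the question to ask: does the answer survive extraction and chunking into the retrieval index, or is it lost along the way? The custom-component syntax, extraction rule, and chunker are all hypothetical stand-ins for whatever a real ingestion pipeline does.

```python
# Sketch of the error analysis described above: the answer exists in the
# authored doc, but does it survive extraction and chunking into the RAG index?
# The <Callout> component, extraction rule, and chunker are invented for illustration.

import re

AUTHORED_DOC = """To rotate a token, open developer settings.
<Callout>Old tokens keep working during a 24-hour grace period.</Callout>"""

def extract_text(doc: str) -> str:
    # Hypothetical ingestion step that drops unknown custom components wholesale,
    # taking their content with them -- the failure mode discussed above.
    return re.sub(r"<Callout>.*?</Callout>", "", doc, flags=re.DOTALL)

def chunk(text: str, size: int = 200) -> list[str]:
    return [text[i:i + size] for i in range(0, len(text), size)]

def diagnose(doc: str, answer_phrase: str) -> str:
    chunks = chunk(extract_text(doc))
    if any(answer_phrase.lower() in c.lower() for c in chunks):
        return "Answer survives ingestion; investigate ranking/embedding instead."
    if answer_phrase.lower() in doc.lower():
        return "Answer is in the authored doc but lost during extraction/chunking."
    return "Answer is not in the doc at all; this is a content gap."

print(diagnose(AUTHORED_DOC, "grace period"))
# -> Answer is in the authored doc but lost during extraction/chunking.
```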
So, basically, partner more with your ML teams and your engineering teams. I think that’s where we’ll see really good outcomes. Yeah.
I think that’s one thing I wanted to add.
Gowri Ramkumar: Fantastic.
So thank you, Netra, for all the insights and thoughts. And it’s definitely adding a lot more points to think about when it comes to AI.
So once again, all the very best for your upcoming projects, and, take care.
Netra Pawar: Yeah. Thank you so much, Gowri.
Gowri Ramkumar: Thank you, thank you.
-
Disclaimer: This transcript was generated using AI. While we aim for high accuracy, there may be minor errors or slight timestamp mismatches.
Enjoyed this conversation?
Don’t miss listening to other episodes of our Knowledgebase Ninjas Podcast, where we invite documentation experts from all walks of the industry to share practical, real-world insights, experiences, and best practices. Subscribe for the latest episodes, and if you found this helpful, pass it to your colleagues!
Read Our eBook
Discover how AI is shaping the future of technical writing:
The Future of Technical Writing – AI’s Impact on Knowledge Management
Watch Webinars
Learn from experts and product specialists in our Webinar Library
Explore Case Studies
See how leading teams are using Document360 to build powerful knowledge bases: Customer Success Stories
Browse Resources
Access our full Resource Library — including blogs, videos, guides, and more
Try Document360 for Free
Experience how top teams create world-class documentation with ease.
Start Your Free Trial
Follow Us for Updates, Tips & Best Practices
LinkedIn | Twitter (X) | Facebook | Instagram | YouTube





