Becoming the Chainsaw (Generative AI)-Wielding Lumberjack (Product Team) of Your Dreams

Lumberjack using a chainsaw to cut two large tree trunks on the ground
Photo by Abby Savage on Unsplash

“AI won’t take your job… somebody using AI will”

Dr. Richard Baldwin

Imagine you’re a lumberjack in 1960, master of multiple kinds of axes and handsaws (I humbly suggest this soundtrack as you read). You’ve trained your team in your way of doing things, and profits are soaring. Along comes the chainsaw… you’re ruined, right? Chainsaws are going to take your jobs…

Perhaps with that attitude, all is lost – but we know that forward-thinking lumber companies used their forestry expertise to become chainsaw experts.  Those that stuck with axes alone lost.

LLM tools like Bard and ChatGPT are no different today. They’re powerful generative AI tools that will transform how we do our work. Success and wisdom in using them, though, hinge on our expertise in our field.

Read on and I’ll share some of how our team has been playing, learning, and winning with LLMs.

Reinventing the (backlog) wheel

It is very tempting for product teams to treat feature and story writing as a creative writing effort. There’s no denying creativity is required: a complex audience (product owners, developers, designers, and more) all needs to align on an abstract concept (what software we’re going to build).

However, starting from a blank backlog page wastes creative effort. Instead, we use a consistent template (like the one shown below) for every feature and story. This is more efficient for writers and readers, allowing us to focus creative energy on higher-value work – like ensuring the content sensibly and clearly conveys what we want to build, and that the results will be valuable.

Value Description
  • In order to…
  • We will…
  • Impacted persona(s)…

Acceptance Criteria
  • Given… (context)
  • When… (action that triggers these ACs being needed)
  • Then… (list of clear, verifiable ACs)

Notes
  • “Out-of-scope” limits
  • Assumptions (and what we’ll do if they prove false)
  • Constraints / dependencies
  • Links to resources / technical guidance / UX specs

Check out this webinar if you want more about using templates, alongside other pro-tips for writing better requirements.

Stacked tree logs with standing trees in background
Photo by Markus Spiske on Unsplash

Leveraging generative AI’s creativity

We’re finding LLMs significantly accelerate backlog refinement. I have a copy/paste prompt I use often that includes the template above and a few lines about how I approach story writing. Add a few lines about the specific feature you’re building, and get back draft backlog items. You can give the LLM feedback, or just take its drafts and edit them yourself. Trees… I mean backlog items that would have taken hours can now be cut down… ready for the team… in minutes.
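
If you’d rather script that than copy/paste into a chat window, here’s a minimal sketch using OpenAI’s Python SDK. To be clear, this is an illustration, not our exact prompt – the model name, the condensed template text, and the example feature are all placeholders you’d swap for your own.

    # Minimal sketch: draft backlog items from a short feature summary, following our template.
    # Assumes OPENAI_API_KEY is set in the environment; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    STORY_TEMPLATE = """
    Value Description: In order to... / We will... / Impacted persona(s)...
    Acceptance Criteria: Given... / When... / Then... (clear, verifiable ACs)
    Notes: out-of-scope limits, assumptions, constraints/dependencies, links
    """

    def draft_backlog_items(feature_notes: str) -> str:
        """Ask the LLM for draft features/stories that follow the template."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {
                    "role": "system",
                    "content": (
                        "You are a product manager drafting backlog items. "
                        "Write every feature and story using this template:\n"
                        + STORY_TEMPLATE
                    ),
                },
                {
                    "role": "user",
                    "content": "Draft backlog items for this feature:\n" + feature_notes,
                },
            ],
        )
        return response.choices[0].message.content

    # Hypothetical feature summary – replace with a few lines about what you're building.
    print(draft_backlog_items("Let returning customers save a draft order and resume it later."))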

There’s a caveat here – when you share information with an LLM tool like ChatGPT, it’s not private.

OpenAI (which didn’t sign an NDA) has that data. For many features, the content isn’t sensitive at all. For others, the sensitive information can be removed. For particularly sensitive features, though, I don’t recommend using an LLM for backlog drafting. It’s safer to do the process manually.

A lot of work doesn’t guarantee a lot of value

In Continuous Discovery Habits, Teresa Torres advocates for managing outcomes instead of outputs. Rather than looking at metrics that tell you how much activity happened (outputs), focus on metrics that tell you how much value you achieved (outcomes). Product teams often optimize for backlog items completed (an output), with little accountability for the value created for the business or customer (outcomes).

At Integral, we talk a lot about “speed-to-value.” While we keep the backlog moving quickly, a giant pile of ‘done’ features or stories is not enough for us to declare victory. Success is how quickly our software helps the business win. The desired business value is front and center on every backlog item, per the template above. This enables developers, designers, and managers to share insights on the fastest and best way to deliver value. We can quickly clarify needs with the business and align on a speed-to-value delivery plan. Because we slice work thin, we also quickly see how our software is or isn’t delivering value, and adjust upcoming features accordingly.

Clean clothes, without airing dirty laundry?

Using LLMs to help deliver outcomes can be a bit touchier than backlog drafting.  You’re navigating sensitive strategy and business topics. What is the wise path here?

Many enterprises now have their own LLM interfaces.  Find out if your organization or client does, follow the usage policies, and try these prompts:

  • Are these features and stories the best way to meet the desired business outcomes?
  • What ways could we slice and order the work to deliver the most value, quickest?
  • What questions would help us clarify and focus on the desired business value/outcomes?

Your team can use what it finds compelling and ignore what doesn’t resonate.

If your only option is a public LLM, there are still safe ways to accelerate outcomes by genericizing your context. Imagine you’re in a busy restaurant with a friend who has relevant expertise. They’re not on the project and haven’t signed an NDA (nor have the people around you), so you can’t say:

I’m working for <client> on this thing that <does something specific> using <this specific technology>…

However, you can say:

I’m working on building <a generically described thing>… any advice on how to get the best outcomes faster?

As you craft this kind of prompt, ask yourself “could anyone take the information I’m giving to an LLM, plus any other publicly available information, and figure out what I’m talking about?” Make sure the answer is ‘no.’
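
If you’re reusing the same kinds of genericized prompts, a simple scrub step can make that ‘no’ easier to guarantee. Here’s a minimal sketch – every name and replacement in it is hypothetical, and a list like this only catches the terms you remember to put on it, so the restaurant test above still applies.

    # Minimal sketch: replace identifying terms with generic descriptions before
    # sharing a prompt with a public LLM. All names below are hypothetical.
    SENSITIVE_TERMS = {
        "Acme Motors": "a large manufacturer",
        "Project Falcon": "our new customer portal",
        "VendorPay v2": "a third-party payments API",
    }

    def genericize(prompt: str) -> str:
        """Swap known sensitive terms for generic descriptions."""
        for term, generic in SENSITIVE_TERMS.items():
            prompt = prompt.replace(term, generic)
        return prompt

    def assert_scrubbed(prompt: str) -> None:
        """Fail loudly if any known sensitive term slipped through."""
        leaked = [term for term in SENSITIVE_TERMS if term in prompt]
        if leaked:
            raise ValueError(f"Prompt still contains sensitive terms: {leaked}")

    draft = "How should Acme Motors slice Project Falcon to deliver value fastest?"
    clean = genericize(draft)
    assert_scrubbed(clean)
    print(clean)  # "How should a large manufacturer slice our new customer portal..."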

LLMs can still play a role, even if it feels hard to get to that ‘no’ or your work is too sensitive to share generically. Embed an LLM as a business domain expert for your product team. Try this:

  • Have a conversation with the LLM about your business domain and the problems you’re trying to solve
  • Use the ‘share’ feature to give your team a link to that convo
  • Encourage the team to ask specific follow-up questions as they arise

This added domain knowledge empowers your product team to ask better questions about business outcomes and better optimize features and stories to deliver value. 

Timelines driven by purpose, supported by assumptions

Raise your hand if these lines feel familiar: 

  • Well when will it be done?
  • We have to hit our deadline!
  • What will it take to get it done before the deadline?
  • It has to be done faster!

We know speedy delivery alone doesn’t guarantee valuable software, but deadlines can feel like easier-to-use accountability measures.  Organizations also need timelines.  Estimates allow coordination of E2E integrations and marketing for new features.  Delivery deadlines help the business plan its investments, manage expected ROI, and optimize revenue so we all get paid.

The winning secret is not getting rid of estimates – it’s making estimates and deadlines more meaningful.  Using the outcome-based feature and story template discussed above helps.  If a timeline is a key business outcome, it can be included right at the top of the backlog item, e.g., “in order to… | we will…”:

  • “migrate vendors to V2 before our V1 contract ends on Dec 1 | complete V2 API updates in Q3 to allow Q4 focus on migration”
  • “support a Q2 marketing push | add new website section by end of Q1”

IT teams also hesitate to give estimates because it feels risky. We find it helpful to document our assumptions about significant risks in the features and stories, in our “Notes” section, e.g.:

  • “This new feature will reuse the code/pattern from existing feature X; we will write new stories if we find this doesn’t work”
  • “The complexity of this feature will require the team’s full focus for one full sprint to get everyone aligned on the approach – during this sprint, unplanned work and backlog churn will likely result in significant delivery delays”

These assumptions give everyone clarity on what the product team is seeing in the delivery weeds. Business and IT leaders can now help unblock, maintain focus, and/or clarify priority for the product team. The product team can work with less anxiety that they’re going to be blamed when risks don’t play out in their favor.

Finally, lean agile delivery also plays a key role here – giving quicker real-life feedback to the team. The key ingredients are thin-slicing features and stories, demoing to the business and all stakeholders at least every iteration, and pushing to PROD every sprint. Because of this, we regularly see whether we are on track for our deadline and learn from how our assumptions played out. Delivery teams can identify unforeseen risks more quickly as the delivered software reveals them. Early adjustments keep delivery teams, IT leaders, and the business aligned and help everyone stay on track to deliver the needed outcomes on predictable timelines.

Check out this webinar if you want more on derisking strategies, and this one has great insights on making better estimates.

Reality-checking timelines, estimates, and assumptions

An LLM is a fantastic partner for working through assumptions and risks and establishing ambitious, realistic, and predictable delivery timelines. It can help you check your assumptions and give you greater confidence in your estimates. Try these prompts in your LLM convo:

  • I’m a business/IT leader and need software <with these features>. How long does it typically take to build?  How can I help my delivery team get this done faster?  What risks, assumptions, and desired outcomes will be most important to clarify with the IT delivery team?
  • We’re a software development team, building <features that…>. We’ve made <these assumptions> about delivering that feature. Did we miss anything? What other risks should we consider?
  • We’re building <features that…>.  What are some ways to deliver value faster?  What might be the thinnest slices that we can deliver first to help us validate our assumptions and address our biggest risk?

I also recommend giving the LLM feedback if its initial answers don’t fit what you need, and/or asking follow-up questions. Often the second and third responses are orders of magnitude more helpful than the first.

Want to be the lumberjacks that use chainsaws to win?

If you are looking to optimize your backlogs for speed-to-value and help your delivery teams get more confident using generative AI tools to deliver winning software outcomes, Integral is here for you. Our team of software consultants has the expertise you need to win in the new world of chainsaws… erm… LLMs and other generative AI tools. Get in touch with us at https://integral.io/contact/ or at hello@integral.io.

It’s time to build your great idea.

Author

  • paul hudson mack (he/they) is a Senior Product Manager at Integral. Over the past 15 years, Paul has led initiatives in fintech, security, social impact, leadership development, and executive facilitation.  His clients’ industries have included automotive, government, non-profit, higher education, and professional services. Earlier in his career, he taught Bio and AP Bio in Detroit (through Teach for America) and did some mainframe Cobol programming. Outside work, Paul enjoys raising his two children in the house he renovated near downtown Detroit and traveling in his camper van (and on airplanes) to off-the-beaten-path locations.  He refuses to wear white socks or flannel and is always up for a glass of bourbon or dry red wine.
