
"So that's 1,500 nuggets right?" Why are we making AI harder than it should be?

Updated: Nov 11, 2024

In one of the more inedible (ahem) recent AI calamities, a fast-food giant's drive-thru bot added 260 six-piece orders of nuggets (over 1,500 pieces!) to a customer's order. The entire project with a massive outsourcer ended up being mothballed. I wonder how much that cost both companies... instead of sweet & sour sauce for those, maybe just sour?


I've been involved in both successful and failed AI implementations. From firsthand and published perspectives alike, it's clear that in the rush to build "anything and everything in AI," the basics of good product development are often being thrown out the window.


Since it's especially easy to become distracted by the "how" of AI, we've got to remember that AI-based projects require staying true to sound product tradecraft:


  • What is the market pain point we're looking to solve for our customers? Would our customers agree that it's one of their key pain points, or have we convinced ourselves it is so that we can solve something with AI? Beware of the elegant solution searching for a pressing problem... it never surprises, delights, or makes money.


  • Approach the problem with an MVP mindset. Testing and releasing in phases is always a good idea, but especially when your team is new to AI (aren't all teams still new to it??). Expecting to be wrong has to be baked into the team's mindset. Judiciously use customer involvement and feedback to make sure you're staying on track toward solving the pain point.


  • Identify and address risk early. We're all convinced that, at least for the near future, AI projects carry more risk than those built on more familiar tech. A favorite SVPG construct is uncovering risk in three dimensions:

    • Value Risk (will the customer buy/use this?)

    • Usability Risk (will the needed user personas figure out how to use this?)

    • Feasibility Risk (are we the team with the chops to implement this solution?)


Delivering in phases (per the above) is one good way to address risk early. Other techniques that can reduce the risk of AI projects include prototyping, end-user discovery interviews, and adding outside expertise to the team for early-stage projects.


Note that none of the above is unique to AI. But in the frenzy to show investors, board members, senior management, customers, sales teams, etc. that "we're kicking butt in AI," we're rushing past key product management principles that always increase our chances of success.
