Build an AI strategy that survives first contact with reality

For one client, one of the world's leading snack food producers, AI is supporting elements of recipe creation, a historically complicated task given the dozens of possible ingredients and the many ways to combine them. By pairing product specialists with AI, the organization can generate higher-quality recipes faster: its system has reduced the average number of steps needed to develop a recipe for a new product from 150 to just 15. It can now delight customers more quickly with new products and new experiences that keep them connected to the brand.

Notably, AI does not work in isolation but rather augments skilled teams, providing guidance and feedback to further improve outcomes. This is a hallmark of successful AI solutions: they are ultimately designed for people, and organizations get the most value from them when a multidisciplinary team brings together domain expertise, technical expertise, and a focus on the humans who will use them.

Guardrails matter

Getting the most from AI is only half the job: your AI strategy should also define the appropriate guardrails.

As solutions become more sophisticated, and embedded more frequently and deeply into software, products, and day-to-day operations, the scope for human error grows too. One common antipattern we see is humans becoming unintentionally over-reliant on AI that is right most of the time: think of the developer who doesn't review AI-generated code, or the Tesla driver lulled into a false sense of security by the car's Autopilot features.

Careful governance parameters around AI usage are needed to avoid that kind of over-dependency and risk exposure.

While many of your AI experiments might produce exciting ideas to explore, you need to be mindful of the tools that underpin them. Some AI solutions are not built with the kind of robust engineering practices you'd demand of other enterprise software. Think carefully about which ones you'd be confident deploying into production.

It helps to test AI models in the same way you would any other application—and don’t let the rush to market cloud your judgment. AI solutions should be supported by the same continuous delivery principles that underpin good product development, with progress made through incremental changes that can be easily reversed if they don’t have the desired impact.
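As a minimal sketch of what that can look like in practice, the test below gates a candidate model behind a fixed evaluation set in CI, so a regression blocks the release and rolling back means reverting to the previous model artifact. The names here (CandidateModel, EVAL_CASES, the 0.92 accuracy floor) are illustrative assumptions, not references to any specific tool or vendor API.

```python
# A minimal sketch of treating an AI model like any other application under test.
# Everything named here is an illustrative assumption, not a real product's API.

ACCURACY_FLOOR = 0.92  # a release gate agreed up front with the product team

# In practice this would be a versioned evaluation set checked into the repo;
# a few inline cases keep the sketch self-contained.
EVAL_CASES = [
    {"input": "This snack is great", "expected": "positive"},
    {"input": "Stale and bland", "expected": "negative"},
    {"input": "Great crunch, great flavour", "expected": "positive"},
]

class CandidateModel:
    """Stand-in for the model under test; swap in your real inference client."""
    def predict(self, text: str) -> str:
        return "positive" if "great" in text.lower() else "negative"

def evaluate(model: CandidateModel, cases: list) -> float:
    """Fraction of evaluation cases the model gets right."""
    correct = sum(1 for c in cases if model.predict(c["input"]) == c["expected"])
    return correct / len(cases)

def test_candidate_meets_release_gate():
    # Run in CI before every deployment: a failure here blocks the release,
    # just as a failing unit test would for any other piece of software.
    assert evaluate(CandidateModel(), EVAL_CASES) >= ACCURACY_FLOOR
```

Because the gate runs on every change, each model update stays small and reversible, which is exactly the continuous delivery discipline described above.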

You will find it helps to be up-front about what you consider to be a “desired” result—it may not only be financial metrics that define your success. Depending on your organization’s context, productivity and customer experience might also be important considerations. You might look at other leading indicators, such as your team’s awareness of the potential of AI and their comfort level in exploring, adopting, or deploying AI solutions. These factors can give you confidence that your team is on track toward improving any lagging indicators of customer experience, productivity, and revenue. However you approach it, you’re more likely to succeed if you’ve identified those metrics at the outset.

Finally, for all the bluster about the threat AI poses to people's jobs, or even to humanity at large, you'll do well to remember that it's your people who will be using the technology. Consider the human side of change, and strike a balance between encouraging people to adopt and innovate with AI while remaining sensitive to the problems it can present. You might, for instance, want to introduce guidelines to protect intellectual property where models draw on external sources, or to protect privacy where you use sensitive customer data. We often find it's better to give our people a say in where AI augments their work. They know, better than anyone, where it can have the most impact.

This content was produced by Thoughtworks. It was not written by MIT Technology Review’s editorial staff.
