If you want to wow your customers with generative AI, you need to embrace a responsible tech mindset.

Over the past 12 months, generative AI has generated fervor and fear in almost equal measure. We've all marveled at the technology's ability to pass bar exams or create award-winning photography. But that level of ingenuity is deeply unsettling for many consumers, who would prefer to know that humans are still at the wheel.

This presents a conundrum for today's business leaders. There are very real opportunities to innovate using generative AI, but there are equally many consumers who are uncomfortable with the technology. How can you seize that opportunity without alienating your customer base?

At Thoughtworks, we wanted to explore consumer attitudes toward genAI and identify ways forward for our clients. The good news is that of the 10,000 people we surveyed from across the globe, 83% agreed that businesses can use genAI to be more innovative and to serve them better. But an even greater proportion (93%) have ethical concerns about genAI. Those concerns include the troubling emergence of deepfakes, the potential loss of the 'human touch,' and data privacy worries, among others.

When questioned, our survey respondents said their top three priorities for businesses deploying genAI were:

- To clearly outline how data is used
- To ensure no illegal content is generated
- To disclose when content was generated by genAI

As with most high-profile tech trends, regulators have been quick to flex their muscles. That may come as welcome news to many consumers; indeed, 82% of respondents said they thought governments have a vital role in ensuring the safety of genAI use. However, relying solely on regulators is not the most effective approach. Technology, and genAI in particular, advances at such breakneck speed that it's nigh on impossible for regulations to keep up.
Too often we've seen regulators struggle to keep pace with technology and enact well-meaning but cumbersome legislation. Instead of waiting for regulations to emerge, which they undoubtedly will, businesses should take the lead by being open and transparent about how they use genAI and how they'll build trust in that use.

That sounds simple enough, right? But when it comes to genAI, good intentions aren't enough: if you want to build trust, you need a very deliberate plan for how you'll uphold principles of ethics, fairness, and inclusivity. Businesses need to adopt 'responsible technology' practices, which give them a powerful lever for deploying innovative genAI solutions while building trust with consumers.

Responsible tech is a philosophy that aligns an organization's use of technology with the interests of both individuals and society. It includes developing tools, methodologies, and frameworks that observe these principles at every stage of the product development cycle, ensuring that ethical concerns are baked in from the outset. This approach is gaining momentum as people realize how technologies such as genAI can impact their daily lives. Even organizations such as the United Nations are codifying their approach to responsible tech.

Consumers urgently want organizations to be responsible and transparent in their use of genAI. This can be a challenge because transparency involves a multitude of factors, from acknowledging that AI is being used to disclosing what data sources are used, what steps were taken to reduce bias, how accurate the system is, and even the carbon footprint associated with the genAI system. To be transparent, you will need to provide the right amount of information, in the right format, to meet the needs of different audiences.
It helps to consider your genAI use through three lenses:

- Technical function: What does the system actually do?
- Communicated function: What do developers or deployers say it does?
- Perceived function: What do users of the system believe it does?

This approach can help ground the complexity of genAI systems in a way that supports meaningful transparency and social responsibility. And it can help you build trust with consumers, many of whom want the innovation that genAI can support.