5 methods to adopt responsible generative AI practice at work

Feature
Apr 05, 2023 | 8 mins
Artificial Intelligence, Business IT Alignment, Data Management

Use generative AI to make people more productive rather than try to replace them. Then you’re more likely to benefit from the technology instead of automating your way into trouble.


Midjourney, ChatGPT, Bing AI Chat, and other AI tools that make generative AI accessible have unleashed a flood of ideas, experimentation, and creativity. If you want to harness that in your organization, questions remain about where to start putting AI to work and how to do it without running into ethical dilemmas, copyright infringement, or factual errors. A good place to start is using it to help people who are already experts in their area save time and be more productive.

There are many places to start using generative AI quickly, and it’s being incorporated into several tools and platforms your organization may already use, so you’ll want to set out guidelines for how to experiment with and adopt these tools. Here are five key areas where it’s worth considering generative AI, plus guidance on finding other appropriate scenarios.

1. Increase developer productivity and know-how

Coding is often considered to be somewhere between an art and a science, but there’s a lot of work in programming that’s routine and repetitive. The rise of cloud platforms and module repositories means that writing modern applications is as much about gluing together components and APIs, refactoring existing code, optimizing environments, and orchestrating pipelines as it is about coming up with algorithms. A lot of that work is ripe for automation and AI assistance, but you need to know how and where you’re using these tools so you can monitor their impact and effectiveness. You could start with single-use tools that speed up specific, common tasks before moving on to full-scale coding assistants.

Documentation is both critical and frequently neglected: not only can you have generative AI document a codebase, you can then build a chat interface into your documentation where developers can ask how the code works and how to use it, or that simply replaces the usual search box. That turns generic documentation into conversational programming, where the AI can take your data and show you how to write a query, for example.
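To make that concrete, here is a minimal sketch of a documentation Q&A helper, assuming the OpenAI Python client (v1 or later) with an OPENAI_API_KEY in the environment; the docs_pages contents, the model name, and the naive keyword retrieval are illustrative placeholders, and a real system would use embeddings and a vector index instead.

# Sketch: answer developer questions from documentation passages only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder documentation pages (name, text); in practice these would be your real docs.
docs_pages = [
    ("auth.md", "Clients authenticate by passing an API key in the X-Api-Key header."),
    ("queries.md", "POST to /v1/query with a JSON body such as {\"sql\": \"select ...\"}."),
    ("limits.md", "Requests are limited to 100 per minute per API key."),
]

def answer(question: str) -> str:
    # Naive keyword scoring to pick the most relevant pages.
    words = question.lower().split()
    scored = sorted(docs_pages, key=lambda page: -sum(w in page[1].lower() for w in words))
    context = "\n\n".join(f"[{name}]\n{text}" for name, text in scored[:2])
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[
            {"role": "system", "content": "Answer only from the provided documentation. If the answer isn't there, say so."},
            {"role": "user", "content": f"Documentation:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

print(answer("How do I authenticate?"))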

Testing is another area that tends to get neglected, so automated unit test generation will help you get much broader test coverage. Commit bots can also help developers write messages that include enough information to be useful to users and other developers, and generative AI could do the same for IT staff documenting upgrades and system reboots.
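As a rough example of how test generation might fit a developer’s workflow, here’s a sketch that sends a function’s source to an LLM and asks for pytest tests to review; it assumes the OpenAI Python client (v1 or later), and the slugify function is just a stand-in.

# Sketch: ask an LLM to draft pytest tests for a function; a developer reviews before committing.
import inspect
import re

from openai import OpenAI

client = OpenAI()

def slugify(title: str) -> str:
    """Lowercase a title and replace runs of non-alphanumeric characters with hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

source = inspect.getsource(slugify)
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": "Write pytest unit tests for this function, covering edge cases "
                   f"such as an empty string and punctuation-only input:\n\n{source}",
    }],
)
# The output is a starting point, not a finished suite: run it and check that the
# asserted behavior is what you actually want before checking it in.
print(response.choices[0].message.content)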

You can also generate backend logic and other boilerplate by telling the AI what you want, so developers can focus on the more interesting and creative parts of the application. Generative AI can write your own codemods (scripts that automate repetitive, time-consuming tasks in large code bases) or help fix the voice and tone of contributions to better suit house style. Coding assistants like GitHub Copilot and IDEs that build in large language models (LLMs) can do all of that and more, but they shouldn’t replace a developer. Developers still need to understand and assess code they haven’t written (and the context it runs in) in case it contains security vulnerabilities, performance bottlenecks, omissions, bad decisions, or just plain mistakes, since the assistant generates code based on what it learned from repos that might have any or all of those issues. Think about how to track AI-generated code in your organization so you can audit it and assess how useful it is. Developers report being more productive and less frustrated when using GitHub Copilot, and Microsoft says 40% of the code Copilot users check in is AI-generated and unmodified. Currently that provenance is lost once a developer leaves their IDE session, so think about internal guidance on recording how AI tools are used.
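For illustration, the kind of codemod you might ask an assistant to draft could be as simple as the sketch below, which renames a deprecated helper across a source tree; the old and new names are hypothetical, and anything more involved than a straight rename is better done with an AST-aware tool than with regular expressions.

# Sketch: rename a deprecated function across a codebase (hypothetical names).
import pathlib
import re

OLD_NAME, NEW_NAME = "fetch_user", "get_user"

for path in pathlib.Path("src").rglob("*.py"):
    text = path.read_text()
    updated = re.sub(rf"\b{OLD_NAME}\b", NEW_NAME, text)
    if updated != text:
        path.write_text(updated)
        print(f"updated {path}")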

2. Uplevel low code and no code business users

Although business users don’t have the expertise to evaluate the code produced by an AI assistant, low code and no code environments are highly constrained, and the places where they integrate generative AI tools are far less likely to be problematic.

Low code apps frequently need to retrieve and filter data, and low code platforms are already adding generative AI features that can generate lookup queries or sanitize the data that comes back—like programmatically adding missing zip codes—which lets business users without database expertise get further without having to stick to prebuilt components or wait for a professional developer to build the query string for them. Open-source tools like Census GPT make it easier to query large public data sets.
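Behind the scenes, that kind of feature often amounts to a call like the sketch below, which turns a business user’s request into a SQL filter for a known schema; it assumes the OpenAI Python client (v1 or later), the schema and request are made up, and in practice the generated clause should be validated and parameterized before it touches real data.

# Sketch: generate a SQL WHERE clause from a plain-language request.
from openai import OpenAI

client = OpenAI()

schema = "orders(order_id, customer_name, city, zip_code, total, order_date)"
request = "orders over $500 from Seattle in the last 30 days"

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model choice
    messages=[{
        "role": "user",
        "content": f"Table schema: {schema}\n"
                   f"Write only a SQL WHERE clause (no explanation) for: {request}",
    }],
)
print(response.choices[0].message.content)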

Code assistants aren’t just for pro developers, either. Wix Artificial Design Intelligence (ADI) can build a whole website for you, mixing code generation and generative design; Uizard does the same for website and app prototypes; Fronty turns images into HTML and CSS; and express design in Microsoft Power Apps turns a hand-drawn sketch or Figma file into a working app, complete with backend.

Most of the generative AI use cases organizations will be interested in are modules that can be called in a low code automation workflow so employees can adapt them to their specific needs. And platforms are already making ChatGPT and other OpenAI APIs available like any other component. However, be sure any warnings or guidance accompanying the text or images generated show up correctly in the low code environment, ideally with a way to give feedback, and that staff know your policy on whether any of this can be presented directly to customers without an employee reviewing it first.

3. Understand documents and data

Combining a custom version of ChatGPT with Bing has brought millions of new users to Microsoft’s search engine. But the way LLMs work means errors and ‘hallucinations’ will happen, as they essentially autocomplete sentences and paragraphs to generate text that matches query prompts. And if the information you want doesn’t exist, the model will still attempt to create something plausible. Even when the information given is correct and matches what most experts in an area would say, responses may be incomplete or inaccurate, and if you’re not already an expert, you may not know what’s missing. These issues can be as much of a problem for enterprise search as they are for the public web; the forthcoming Microsoft 365 Copilot tool will try to deal with that by grounding queries in data from the Microsoft Graph of documents and entities and providing references, but it might still miss important points you’ll need to add in yourself.

Start taking advantage of the opportunities to use LLMs to summarize and analyze documents, or to generate text that explains concepts, in more constrained scenarios where that information gets reviewed internally by people with expertise rather than being shown directly to your customers or other end users.
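A minimal summarization sketch for that kind of internal review workflow might look like the following, assuming the OpenAI Python client (v1 or later); the chunk size and model name are placeholder choices, and a reviewer still needs to check the result against the source.

# Sketch: summarize a long document in chunks, then summarize the summaries.
from openai import OpenAI

client = OpenAI()

def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user", "content": f"Summarize in three bullet points:\n\n{text}"}],
    )
    return response.choices[0].message.content

def summarize_document(document: str, chunk_chars: int = 8000) -> str:
    chunks = [document[i:i + chunk_chars] for i in range(0, len(document), chunk_chars)]
    partial_summaries = [summarize(chunk) for chunk in chunks]
    if len(partial_summaries) == 1:
        return partial_summaries[0]
    # Second pass condenses the per-chunk summaries into one overview.
    return summarize("\n".join(partial_summaries))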

Generate a knowledge graph to visualize the connections and relationships between different entities as a way to help you understand a project, community or ecosystem. The Copilot tool coming to Excel promises an interactive way to get insights and ask questions about data in a sandbox that doesn’t change the underlying data, so any mistakes might take you down the wrong path but shouldn’t contaminate the original information for future analysis.

Storytelling with data is another effective way to communicate key trends, and AI-powered analytics like Power BI’s Smart Narratives can find anomalies and contributing factors, then explain them with charts and autogenerated descriptions. This avoids the problems LLMs have with math because the insights are derived by models like linear regression and then described by the language model. These kinds of ensemble approaches are likely to become more common. Similarly, security tools are starting to use language generation to explain threats, anomalies, and possible evidence of breaches detected by AI in clear, customized language that tells you what it means and what to do about it. In the future, expect to be able to ask these kinds of tools questions and have them explain their recommendations.
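Here’s a small sketch of that ensemble pattern, with the numbers coming from a regression and the language model only writing the narrative around them; it assumes numpy and the OpenAI Python client (v1 or later), and the sales figures are invented.

# Sketch: a statistical model derives the insights, the LLM only narrates them.
import numpy as np
from openai import OpenAI

client = OpenAI()

months = np.arange(12)
sales = np.array([110, 115, 112, 130, 128, 140, 138, 150, 149, 165, 170, 182])

slope, intercept = np.polyfit(months, sales, 1)          # trend comes from regression, not the LLM
biggest_jump_month = int(np.argmax(np.diff(sales))) + 2  # 1-indexed month where the largest increase landed

facts = (f"Average monthly growth: {slope:.1f} units. "
         f"Largest month-over-month jump: month {biggest_jump_month}. "
         f"Latest month: {sales[-1]} units.")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model choice
    messages=[{"role": "user",
               "content": f"Write a two-sentence dashboard narrative using only these facts: {facts}"}],
)
print(response.choices[0].message.content)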

You can also make existing chatbots smarter and more flexible by moving beyond keywords and canned responses to something that sounds more natural and can automatically include new information as your knowledge base updates. Again, it’s tempting to use generative AI chatbots directly with customers to increase customer satisfaction and reduce costs, but this is a riskier scenario than using them inside your organization to surface useful information about benefits and other HR questions, for example. While a sassy chatbot will suit some brands, you don’t want to make the headlines because a customer received dangerous advice or got insulted by your chatbot. Using generative AI for agent assistance can get you the productivity boost with less of the risk.  

4. Speed up business user workflow

Meetings are supposed to be where business decisions are made and knowledge is shared, but far too much meeting value never leaves the room. AI tools like Microsoft Teams Premium, Dynamics 365 Copilot, and the ChatGPT app for Slack create summaries and record the action items assigned to attendees, as well as to people who weren’t in the room and may not know what they’re on the hook for. This can also help avoid power plays around who’s asked to take notes and do other ‘office housework,’ for instance.

Being able to catch up with a busy Slack channel once a day could also improve productivity and work-life balance, but those who make the plans and decisions should take responsibility for making sure AI summaries, action items, and timescales are accurate. AI tools that summarize calls with customers and clients can help managers supervise and train staff. That might be as useful for financial advisors as for call center workers, but tools that monitor employee productivity need to be used with empathy to avoid concerns about workplace surveillance. User feedback and product reviews are helpful, but the sheer volume can be overwhelming and nuggets of useful information might be buried pages deep.

Generative AI can classify, summarize, and categorize responses to give aggregate feedback that’s easier to absorb. In the long term, it’s easy to imagine a personal shopping assistant that suggests items you’d want to buy and answers questions about them rather than leaving you to scroll through pages of reviews and comments. But again, businesses will need to be cautious about introducing tools that might surface offensive or defamatory opinions, or be too enthusiastic about filtering out negative reactions. Generative AI tools can also read and summarize long documents, and use the information to draft new ones. There are already tools like Docugami that promise to extract due dates and deliverables from contracts, and international law firm Allen & Overy is trialing a platform to help with contract analysis and regulatory compliance. Generating semi-structured documents like MoUs, contracts, or statements of work may speed up business processes and help you standardize some business terms programmatically, but expect to need a lot of flexibility and oversight.
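To illustrate the first of those ideas, classifying and tallying feedback, here’s a minimal sketch assuming the OpenAI Python client (v1 or later); the reviews and theme labels are made up, and a real pipeline would batch requests and spot-check the labels.

# Sketch: classify reviews into fixed themes and tally them for an internal report.
from collections import Counter

from openai import OpenAI

client = OpenAI()

THEMES = ["shipping", "quality", "price", "support", "other"]
reviews = [
    "Arrived two weeks late and the box was crushed.",
    "Great value for the price, would buy again.",
    "Support never answered my emails.",
]

def classify(review: str) -> str:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model choice
        messages=[{"role": "user",
                   "content": f"Pick one label from {THEMES} for this review and reply with the label only.\n\n{review}"}],
    )
    label = response.choices[0].message.content.strip().lower()
    return label if label in THEMES else "other"

print(Counter(classify(review) for review in reviews))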

5. Get over writer’s block, spruce up designs

You don’t have to turn your whole writing process over to an AI just to get help with brainstorming, copywriting, and creating images or designs. Office 365 and Google Docs will soon allow you to ask generative AI to create documents, emails, and slideshows, so you’ll want to have a policy on how these are reviewed for accuracy before they’re shared with anyone. Again, start with more constrained tasks and internal uses that you can monitor.

Generative AI can suggest what to write in customer outreach emails, thank you messages, or warnings about logistical issues, right inside your email or in a CRM like Salesforce, Zoho, or Dynamics 365, either as part of the platform or through a third-party tool. There’s also a lot of interest in using AI for marketing, but there are brand risks too. Treat these options only as a way to get started, not as the final version before clicking send.

AI-generated text might not be perfect, but if you have a lot of blanks to fill, it’s likely better than nothing. Shopify Magic, for instance, can take basic product details and write consistent, SEO-tuned product descriptions for an online storefront, and once you have something, you can improve on it. Also, Reddit and LinkedIn use Azure Vision Services to create captions and alternative text for images to improve accessibility when members don’t add those themselves. If you have a large video library for training, auto-generated summaries might help employees make the most of their time.

Image generation from text can be extremely powerful, and tools like the new Microsoft Designer app put image diffusion models in the hands of business users who might balk at using a Discord server to access Midjourney, and who don’t have the expertise to use a Stable Diffusion plugin in Photoshop. But AI-generated images are also controversial, with issues ranging from deepfakes and uncanny valley effects to the source of training data and the ethics of using the work of known artists without compensation. Organizations will want to have a very clear policy on using generated images to avoid the more obvious pitfalls.

Finding your own uses

As you can see, there are opportunities to benefit from generative AI in everything from customer support and retail, to logistics and legal services—anywhere you want a curated interaction with a reliable information source.

To use it responsibly, start with natural language processing use cases such as classification, summarization, and text generation for non-customer-facing scenarios where the output is reviewed by humans who have the expertise to spot and correct errors and false information, and look for an interface that makes it easy and natural to do that rather than just accepting suggestions. It’ll be tempting to save time and money by skipping human involvement, but the damage to your business could be significant if what’s generated is inaccurate, irresponsible, or offensive.

Many organizations are worried about leaking data into the models in ways that might help competitors. Google, Microsoft, and OpenAI have already published data usage policies that say the data and prompts used by one company will only be used to train that company’s own model, not the core model supplied to every customer. But you’ll still want guidance on what information staff can copy into public generative AI tools.

Vendors also say that users own the input and output of the models, which is a good idea in theory but may not reflect the complexity of copyright and plagiarism concerns with generative AI. Models like ChatGPT don’t include citations, so you don’t know whether the text they return is correct or copied from someone else. Paraphrasing isn’t exactly plagiarism, but misappropriating an original idea or insight from someone else isn’t a good look for any business.

It’s also important for organizations to develop AI literacy and have staff become familiar with using and evaluating the output of generative AI. Start small with areas that aren’t critical and learn from what works.
