Let the Urgency of Your Customers’ Needs Guide Your Sales Strategy

When companies are creating profiles of possible target customers, there is a dimension they often overlook: the urgency of the need for the offering. This article provides a process for segmenting prospective customers in this fashion and creating a sales strategy.

Many business leaders believe that they fully understand their best target customers. They’ve developed clear profiles (a.k.a. personas) that are richly detailed with well-researched parameters, such as demographics (e.g., age, education level, years at the company, role) or firmographics (e.g., annual revenues, number of employees, industry, geography, years in business). While such characteristics are important, they ignore another crucial dimension: urgency of need.

A company that offers a software-as-a-service billing solution for small and mid-sized private dental practices may focus on classic demographics, such as the size of the practice (number of employees or number of dentists), the age of the practice (since older practices may more likely have outdated systems), or the amount of insurance billing the practice does each year.

These variables are useful in helping to produce a list of prospects, but they don’t determine which of these dental practices the sales team should call on first. If, however, the company added data that reflects which of these practices’ needs is most urgent — say, those that have advertised for billing and claims administration help more than twice in the past year (suggesting that they are struggling to keep up with billing) — salespeople would be able to prioritize their attention on these prospects.
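To make this concrete, here is a minimal sketch of how a sales team might turn urgency signals into a call-priority ranking. The signals, weights, and thresholds are illustrative assumptions, not a prescribed model:

```python
# A minimal sketch of urgency-based prospect scoring. The signal names and
# weights are hypothetical; a real model would use whatever urgency data the
# firm can actually observe (job postings, system age, claim volume, etc.).
from dataclasses import dataclass

@dataclass
class Prospect:
    name: str
    billing_job_ads_last_year: int   # help-wanted ads for billing staff
    system_age_years: int            # age of the current billing system
    annual_insurance_claims: int

def urgency_score(p: Prospect) -> float:
    """Higher score = more urgent need; call these prospects first."""
    score = 0.0
    if p.billing_job_ads_last_year >= 2:          # struggling to keep up with billing
        score += 3.0
    score += min(p.system_age_years / 5, 2.0)     # older systems weigh more
    score += min(p.annual_insurance_claims / 10_000, 1.0)
    return score

prospects = [
    Prospect("Maple Dental", 3, 12, 18_000),
    Prospect("Lakeview Orthodontics", 0, 2, 4_000),
]
for p in sorted(prospects, key=urgency_score, reverse=True):
    print(p.name, round(urgency_score(p), 2))
```

Sorting the prospect list by a score like this tells the sales team whom to call first, and the weights can be recalibrated as win-rate data accumulates.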

The Four Segments

This needs-based approach entails sorting potential customers into four segments:

  1. Urgent. The customer recognizes that it has an immediate need. (We just had another billing person quit!)
  2. Non-urgent. The customer recognizes the need, but it isn’t a high priority at this time. (We realize that our billing needs are changing and our current system will need to be revamped. We plan to start looking into this in the next year.)
  3. Currently met. The customer believes it already has an adequate solution to address the need at this time but recognizes it may not be a long-term solution. (We have an older billing system in place that still does the trick for now.)
  4. None. The customer simply has no need and doesn’t expect one anytime soon. (Our small practice has a limited number of patients who pay out of pocket. Since all payments are made at the time of service, we simply don’t need a complex new billing system.)

This focus on the urgency of target customers’ needs may sound like common sense, but we have found in our work with B2B companies — from mid-sized firms to Fortune 50 giants in an array of industries such as financial services, enterprise information technology, utilities, industrial solutions, and health care technology — that they often fail to consider this dimension. Here is a process a firm can employ to apply this approach.

Identify new customers.

To identify prospects outside of your existing customer base, you can use publicly available information. One source we’ve already mentioned: help-wanted ads that reflect a particular need.

But there are plenty of others. For instance, if a company sells inventory management solutions, a source of valuable data might be manufacturing industry merger-and-acquisition data, which could reveal companies with an urgent need to change or merge systems such as those for managing inventories. If a company sells quality-management solutions, a source of valuable data could be companies that are getting hammered for poor quality on social media.

Gather the necessary information.

Identifying your customers’ true urgency of need requires looking beyond typical demographic and firmographic profiling. This starts with an outreach initiative to talk to customers and prospects. The purpose is to ask questions that reveal new target customer parameters affecting the urgency of the customer’s needs:

  • Frustrations. How urgent is the need to resolve these frustrations? Which frustration would best accelerate success if resolved?
  • Goals. Are your goals clear, consistent, reasonable, and measurable? Have your goals shifted recently?
  • Roadblocks. What keeps you from reaching your goals? (i.e., What keeps you up at night?) What is the magnitude of the impact of these roadblocks?
  • Environmental and situational factors. Are you experiencing any industry consolidation, organizational or executive management changes or instability, competitive changes, regulatory changes, and so on? What is the magnitude of the impact of these factors?
  • Technology factors. Are there new or changing technologies that will impact your ability to achieve your goals? Are you at risk due to technology end-of-life issues or incompatibility?

Assess your firm’s ability to serve lower-level segments.

Once a company has performed its needs-based segmentation effort, it should seek to answer the following questions about each of the four levels. The findings will dictate the sales and marketing strategy, the level of investment, and resource allocations.

Level 1. Urgent need

How quickly can we meet their need? How can we best serve them? Is the market opportunity large enough to focus only on these prospective customers? Given the customer’s urgency, how do we price our products to optimize margins without damaging relationships by appearing exploitative?

Level 2. Non-urgent need

Can we convince them that their need is more urgent than they currently believe? How do we effectively stay in touch with them so we remain top of mind when they perceive that their need has become urgent?

Level 3. Need currently met

Should we walk away from these prospects? If so, when and how do we touch base with them to see if their needs have changed? Or is there an opportunity to continue to work to convince them that their need is either more significant than they realize or could be much better addressed? If so, what’s the best approach to get them to reconsider their current situation and recognize their true need and its urgency?

Level 4. No need

Should we completely remove these contacts as any potential prospect? Is there some other need we may be able to address for them — perhaps with another product? Should we be in contact on a planned basis to see if their situation has changed? How do we best do that?

The ideal customers are those who clearly understand and recognize they have an urgent need for your offering. However, if that opportunity is not enough to meet the company’s sales volume target, it may be necessary to extend efforts beyond Level 1. Gaining the attention of these additional target customers, challenging their perceptions of their needs, and educating them on how your offering could benefit them will require resources. Consequently, a critical assessment is required to determine whether the opportunity outweighs the investment necessary to address customers in these other levels.

Test your new targets.

Before committing to a complete revamp of how your salespeople are prioritizing opportunities, select one or two experienced salespeople to help you test your new target customer parameters. Identify a few prospects that align to your revamped target profiles, and see how the selected salespeople are able to penetrate them.

Revamp your sales messaging and training.

Include prospective customers’ level of need in your sales messaging — the language that the sales team uses in its interactions with customers. Revamp your sales tools (materials such as brochures, technical papers, and customer testimonials used in the selling process) to include the urgency of need. And teach salespeople how to read and react to the prospective customer’s level of need and adapt their language appropriately.

By adding urgency of need to target customers’ profiles, companies can do more than differentiate their offerings more effectively. They can also identify new growth opportunities and successfully pivot away from slowing or tightening markets. They can accelerate the sales of new products. Last but not least, they can turn underachieving sales teams into strong performers.

How to Train Generative AI Using Your Company’s Data

Many companies are experimenting with ChatGPT and other large language or image models. They have generally found them to be astounding in terms of their ability to express complex ideas in articulate language. However, most users realize that these systems are primarily trained on internet-based information and can’t respond to prompts or questions regarding proprietary content or knowledge.

Leveraging a company’s proprietary knowledge is critical to its ability to compete and innovate, especially in today’s volatile environment. Organizational innovation is fueled through effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how. However, knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings. As such, a company’s comprehensive knowledge is often unaccounted for and difficult to organize and deploy where needed in an effective or efficient way.

Emerging technologies in the form of large language and image generative AI models offer new opportunities for knowledge management, thereby enhancing company performance, learning, and innovation capabilities. For example, in a study conducted in a Fortune 500 provider of business process software, a generative AI-based system for customer support led to increased productivity of customer support agents and improved retention, while leading to higher positive feedback on the part of customers. The system also expedited the learning and skill development of novice agents.

Like that company, a growing number of organizations are attempting to leverage the language processing skills and general reasoning abilities of large language models (LLMs) to capture and provide broad internal (or customer) access to their own intellectual capital. They are using these models for such purposes as informing their customer-facing employees on company policy and product/service recommendations, solving customer service problems, or capturing employees’ knowledge before they depart the organization.

These objectives were also present during the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task. Today, however, generative AI is rekindling the possibility of capturing and disseminating important knowledge throughout an organization and beyond its walls. As one manager using generative AI for this purpose put it, “I feel like a jetpack just came into my life.” Despite current advances, some of the same factors that made knowledge management difficult in the past are still present.

The Technology for Generative AI-Based Knowledge Management

The technology to incorporate an organization’s specific domain knowledge into an LLM is evolving rapidly. At the moment there are three primary approaches to incorporating proprietary content into a generative model.

Training an LLM from Scratch

One approach is to create and train one’s own domain-specific model from scratch. That’s not a common approach, since it requires a massive amount of high-quality data to train a large language model, and most companies simply don’t have it. It also requires access to considerable computing power and well-trained data science talent.

One company that has employed this approach is Bloomberg, which recently announced that it had created BloombergGPT for finance-specific content and a natural-language interface with its data terminal. Bloomberg has over 40 years’ worth of financial data, news, and documents, which it combined with a large volume of text from financial filings and internet data. In total, Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time. Few companies have those resources available.

Fine-Tuning an Existing LLM

A second approach is to “fine-tune” an existing LLM, adding specific domain content to a system that is already trained on general knowledge and language-based interaction. This approach involves adjusting some parameters of a base model, and typically requires substantially less data — usually only hundreds or thousands of documents, rather than millions or billions — and less computing time than creating a new model from scratch.
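As a rough illustration, here is a minimal sketch of what fine-tuning can look like using the open-source Hugging Face transformers library. The base model name, data file, and hyperparameters are placeholders, and commercial LLM vendors expose fine-tuning through their own APIs instead:

```python
# A minimal sketch of fine-tuning a small causal language model on domain
# documents with Hugging Face transformers. "gpt2" and "domain_docs.txt"
# are illustrative placeholders, not a recommended setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base = "gpt2"  # placeholder base model; swap in your own
tokenizer = AutoTokenizer.from_pretrained(base)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base)

# Assumes curated domain documents in a plain-text file, one passage per line.
dataset = load_dataset("text", data_files={"train": "domain_docs.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    # mlm=False selects standard causal (next-token) language modeling
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```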

Google, for example, used fine-tune training on its Med-PaLM2 (second version) model for medical knowledge. The research project started with Google’s general PaLM2 LLM and retrained it on carefully curated medical knowledge from a variety of public medical datasets. The model was able to answer 85% of U.S. medical licensing exam questions — almost 20% better than the first version of the system. Despite this rapid progress, when tested on such criteria as scientific factuality, precision, medical consensus, reasoning, bias and harm, and evaluated by human experts from multiple countries, the development team felt that the system still needed substantial improvement before being adopted for clinical practice.

The fine-tuning approach has some constraints, however. Although it requires much less computing power and time than training an LLM from scratch, it can still be expensive, which was not a problem for Google but would be for many other companies. It requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors. Some data scientists argue that it is best suited not to adding new content, but rather to adding new content formats and styles (such as chat or writing like William Shakespeare). Additionally, some LLM vendors (for example, OpenAI) do not allow fine-tuning on their latest LLMs, such as GPT-4.

Prompt-tuning an Existing LLM

Perhaps the most common approach to customizing the content of an LLM, at least for companies that aren’t cloud vendors, is to tune it through prompts. With this approach, the original model is kept frozen and is modified through prompts in the context window that contain domain-specific knowledge. After prompt tuning, the model can answer questions related to that knowledge. This approach is the most computationally efficient of the three, and it does not require a vast amount of data to be trained on a new content domain.

Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge. The goal was to provide the company’s financial advisors with accurate and easily accessible knowledge on key issues they encounter in their roles advising clients. The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.

While this is perhaps the easiest of the three approaches for an organization to adopt, it is not without technical challenges. When using unstructured data like text as input to an LLM, the data is likely to be too large, with too many important attributes, to enter directly in the context window. The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada). The vector embeddings are a more compact representation of this data that preserves contextual relationships in the text. When a user enters a prompt into the system, a similarity algorithm determines which vectors should be submitted to the GPT-4 model. Although several vendors are offering tools to make this process of prompt tuning easier, it is still complex enough that most companies adopting the approach would need substantial data science talent.
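A minimal sketch of that retrieval step follows. The embed() function below is a toy stand-in for a pre-trained embedding model such as OpenAI’s Ada, and the documents and brute-force cosine similarity search are illustrative:

```python
# A minimal sketch of embedding-based retrieval for prompt tuning.
# All content and the toy embedding are illustrative assumptions.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a pre-trained embedding model (hashed bag-of-words).
    A real system would call a model such as OpenAI's Ada instead."""
    v = np.zeros(dim)
    for token in text.lower().split():
        v[hash(token) % dim] += 1.0
    return v

# Curated knowledge base (hypothetical content).
documents = [
    "Policy guidance on IRA rollovers and required minimum distributions.",
    "Overview of 529 college savings plans and state tax treatment.",
    "Procedures for opening a managed account for a new client.",
]
doc_vectors = np.array([embed(d) for d in documents])

def top_k(query: str, k: int = 2) -> list[str]:
    """Return the k documents whose vectors are most similar to the query."""
    q = embed(query)
    sims = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9)
    return [documents[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved text is then placed in the LLM's context window along with
# the user's question, so the frozen model can answer from it.
context = "\n\n".join(top_k("How do I handle a client's IRA rollover?"))
```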

However, this approach does not need to be very time-consuming or expensive if the needed content is already present. The investment research company Morningstar, for example, used prompt tuning and vector embeddings for its Mo research tool built on generative AI. It incorporates more than 10,000 pieces of Morningstar research. After only a month or so of work on its system, Morningstar opened Mo usage to its financial advisors and independent investor customers. It even attached Mo to a digital avatar that could speak its answers aloud. This technical approach is not expensive; in its first month in use, Mo answered 25,000 questions at an average cost of $0.002 per question, for a total cost of $3,000.

Content Curation and Governance

As with traditional knowledge management, in which documents were loaded into discussion databases like Microsoft SharePoint, content needs to be high-quality before customizing LLMs in any fashion. In some cases, as with the Google Med-PaLM2 system, there are widely available databases of medical knowledge that have already been curated. Otherwise, a company needs to rely on human curation to ensure that knowledge content is accurate, timely, and not duplicated. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who are constantly scoring documents along multiple criteria; these scores determine the suitability for incorporation into the GPT-4 system. Companies that do not already have well-curated content will find it challenging to curate it for this purpose alone.

Morgan Stanley has also found that it is much easier to maintain high-quality knowledge if content authors are aware of how to create effective documents. They are required to take two courses, one on the document management tool and a second on how to write and tag these documents. This is a component of the company’s content governance approach — a systematic method for capturing and managing important digital content.

At Morningstar, content creators are being taught what type of content works well with the Mo system and what does not. They submit their content into a content management system, and it goes directly into the vector database that supplies the OpenAI model.

Quality Assurance and Evaluation

An important aspect of managing generative AI content is ensuring quality. Generative AI is widely known to “hallucinate” on occasion, confidently stating facts that are incorrect or nonexistent. Errors of this type can be problematic for businesses but could be deadly in healthcare applications. The good news is that companies that have tuned their LLMs on domain-specific information have found that hallucinations are less of a problem than with out-of-the-box LLMs, at least if there are no extended dialogues or non-business prompts.

Companies adopting these approaches to generative AI knowledge management should develop an evaluation strategy. For example, BloombergGPT, which is intended for answering financial and investing questions, was evaluated on public financial-task datasets, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks. The Google Med-PaLM2 system, eventually oriented to answering patient and physician medical questions, had a much more extensive evaluation strategy, reflecting the criticality of accuracy and safety in the medical domain.

Life or death isn’t an issue at Morgan Stanley, but producing highly accurate responses to financial and investing questions is important to the firm, its clients, and its regulators. The answers provided by the system were carefully evaluated by human reviewers before it was released to any users. Then it was piloted for several months by 300 financial advisors. As its primary approach to ongoing evaluation, Morgan Stanley has a set of 400 “golden questions” to which the correct answers are known. Every time any change is made to the system, employees test it with the golden questions to see if there has been any “regression,” or less accurate answers.
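A golden-question harness can be quite simple. The sketch below assumes an exact-phrase answer key, which is a simplification; real evaluations often rely on human review or semantic-similarity scoring, and ask_model() is a placeholder for a call to the tuned LLM:

```python
# A minimal sketch of "golden question" regression testing. The question,
# answer key, and canned response are hypothetical throughout.
GOLDEN_SET = [
    {"question": "What is the firm's minimum for managed accounts?",
     "required_phrases": ["$250,000"]},
    # ...in practice, hundreds of entries
]

def ask_model(question: str) -> str:
    """Placeholder for a call to the tuned LLM."""
    return "Our minimum for managed accounts is $250,000."

def regression_pass_rate(golden_set) -> float:
    """Fraction of golden questions whose answers contain all required phrases."""
    passed = sum(
        all(p in ask_model(item["question"]) for p in item["required_phrases"])
        for item in golden_set
    )
    return passed / len(golden_set)

# Run after every system change; a drop in pass rate signals a regression.
print(f"pass rate: {regression_pass_rate(GOLDEN_SET):.0%}")
```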

Legal and Governance Issues

Legal and governance issues associated with LLM deployments are complex and evolving, leading to risk factors involving intellectual property, data privacy and security, bias and ethics, and false/inaccurate output. Currently, the legal status of LLM outputs is still unclear. Since LLMs don’t produce exact replicas of any of the text used to train the model, many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws). In any case, it is a good idea for any company making extensive use of generative AI for managing knowledge (or most other purposes for that matter) to have legal representatives involved in the creation and governance process for tuned LLMs. At Morningstar, for example, the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
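One plausible way to implement such a “pre-prompt” is as a system message that scopes what the assistant will and won’t answer. The wording and topic boundaries below are invented for illustration and are not Morningstar’s actual pre-prompts:

```python
# A minimal sketch of a guardrail "pre-prompt" expressed as a system message.
# The instructions and topic list are illustrative assumptions.
GUARDRAIL_PREPROMPT = """You are a research assistant for financial advisors.
Answer only questions about investment research, fund analysis, and
portfolio construction, using the reference documents provided.
If a question asks for personalized investment advice, legal advice,
or anything outside the provided research, politely decline and suggest
the user consult the appropriate professional."""

def build_messages(user_question: str, context: str) -> list[dict]:
    """Assemble a chat request with the guardrail pre-prompt first."""
    return [
        {"role": "system", "content": GUARDRAIL_PREPROMPT},
        {"role": "system", "content": f"Reference documents:\n{context}"},
        {"role": "user", "content": user_question},
    ]
```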

User prompts into publicly available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.

In order to address confidentiality and privacy concerns, some vendors are providing advanced safety and security features for LLMs, including erasing user prompts, restricting certain topics, and preventing source code and proprietary data from being input into publicly accessible LLMs. Furthermore, vendors of enterprise software systems are incorporating a “Trust Layer” in their products and services. Salesforce, for example, incorporated its Einstein GPT feature into its AI Cloud suite to address the “AI Trust Gap” between companies that want to quickly deploy LLM capabilities and the aforementioned risks these systems pose in business environments.

Shaping User Behavior

Ease of use, broad public availability, and useful answers that span various knowledge domains have led to rapid and somewhat unguided, organic adoption of generative AI-based knowledge management by employees. For example, a recent survey indicated that more than a third of surveyed employees used generative AI in their jobs, but 68% of respondents didn’t inform their supervisors that they were using the tool. To realize the opportunities and manage the potential risks of applying generative AI to knowledge management, companies need to develop a culture of transparency and accountability.

In addition to implementation of policies and guidelines, users need to understand how to safely and effectively incorporate generative AI capabilities into their tasks to enhance performance and productivity. Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work. Generative AI-based knowledge management systems can automate information-intensive search processes (legal case research, for example) as well as high-volume and low-complexity cognitive tasks such as answering routine customer emails. This approach increases efficiency of employees, freeing them to put more effort into the complex decision-making and problem-solving aspects of their jobs.

Some specific behaviors that might be desirable to inculcate — either through training or policies — include:

  • Knowledge of what types of content are available through the system;
  • How to create effective prompts;
  • What types of prompts and dialogues are allowed, and which ones are not;
  • How to request additional knowledge content to be added to the system;
  • How to use the system’s responses in dealing with customers and partners;
  • How to create new content in a useful and effective manner.

Both Morgan Stanley and Morningstar trained content creators in particular on how best to create and tag content, and what types of content are well-suited to generative AI usage.

“Everything Is Moving Very Fast”

One of the executives we interviewed said, “I can tell you what things are like today. But everything is moving very fast in this area.” New LLMs and new approaches to tuning their content are announced daily, as are new products from vendors with specific content or task foci. Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.

While there are many challenging issues involved in building and using generative AI systems trained on a company’s own knowledge content, we’re confident that the overall benefit to the company is worth the effort to address these challenges. The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.

13 Principles for Using AI Responsibly

The competitive nature of AI development poses a dilemma for organizations, as prioritizing speed may lead to neglecting ethical guidelines, bias detection, and safety measures. Known and emerging concerns associated with AI in the workplace include the spread of misinformation, copyright and intellectual property issues, cybersecurity, data privacy, and the need to navigate rapidly evolving and ambiguous regulations. To mitigate these risks, we propose thirteen principles for responsible AI at work.

Love it or loathe it, the rapid expansion of AI will not slow down anytime soon. But AI blunders can quickly damage a brand’s reputation — just ask Microsoft’s first chatbot, Tay. In the tech race, all leaders fear being left behind if they slow down while others don’t. It’s a high-stakes situation where cooperation seems risky and defection tempting. This “prisoner’s dilemma” (as it’s called in game theory) poses risks to responsible AI practices. Leaders, prioritizing speed to market, are driving the current AI arms race in which major corporate players are rushing products and potentially short-changing critical considerations like ethical guidelines, bias detection, and safety measures. For instance, major tech corporations are laying off their AI ethics teams precisely at a time when responsible actions are needed most.

It’s also important to recognize that the AI arms race extends beyond the developers of large language models (LLMs) such as OpenAI, Google, and Meta. It encompasses many companies utilizing LLMs to support their own custom applications. In the world of professional services, for example, PwC announced it is deploying AI chatbots for 4,000 of their lawyers, distributed across 100 countries. These AI-powered assistants will “help lawyers with contract analysis, regulatory compliance work, due diligence, and other legal advisory and consulting services.” PwC’s management is also considering expanding these AI chatbots into their tax practice. In total, the consulting giant plans to pour $1 billion into “generative AI” — a powerful new tool capable of delivering game-changing boosts to performance.

In a similar vein, KPMG launched its own AI-powered assistant, dubbed KymChat, which will help employees rapidly find internal experts across the entire organization, wrap them around incoming opportunities, and automatically generate proposals based on the match between project requirements and available talent. Their AI assistant “will better enable cross-team collaboration and help those new to the firm with a more seamless and efficient people-navigation experience.”

Slack is also incorporating generative AI into the development of Slack GPT, an AI assistant designed to help employees work smarter not harder. The platform incorporates a range of AI capabilities, such as conversation summaries and writing assistance, to enhance user productivity.

These examples are just the tip of the iceberg. Soon hundreds of millions of Microsoft 365 users will have access to Business Chat, an agent that joins the user in their work, striving to make sense of their Microsoft 365 data. Employees can prompt the assistant to do everything from developing status report summaries based on meeting transcripts and email communication to identifying flaws in strategy and coming up with solutions.

This rapid deployment of AI agents is why Arvind Krishna, CEO of IBM, recently wrote that, “[p]eople working together with trusted A.I. will have a transformative effect on our economy and society … It’s time we embrace that partnership — and prepare our workforces for everything A.I. has to offer.” Simply put, organizations are experiencing exponential growth in the installation of AI-powered tools and firms that don’t adapt risk getting left behind.

AI Risks at Work

Unfortunately, remaining competitive also introduces significant risk for both employees and employers. For example, a 2022 UNESCO publication on “the effects of AI on the working lives of women” reports that AI in the recruitment process is excluding women from upward moves. One study the report cites, which included 21 experiments consisting of over 60,000 targeted job advertisements, found that “setting the user’s gender to ‘Female’ resulted in fewer instances of ads related to high-paying jobs than for users selecting ‘Male’ as their gender.” And even though this AI bias in recruitment and hiring is well-known, it’s not going away anytime soon. As the UNESCO report goes on to say, “A 2021 study showed evidence of job advertisements skewed by gender on Facebook even when the advertisers wanted a gender-balanced audience.” It’s often a matter of biased data, which will continue to infect AI tools and threaten key workforce factors such as diversity, equity, and inclusion.

Discriminatory employment practices may be only one of a cocktail of legal risks that generative AI exposes organizations to. For example, OpenAI is facing its first defamation lawsuit as a result of allegations that ChatGPT produced harmful misinformation. Specifically, the system produced a summary of a real court case that included fabricated accusations of embezzlement against a radio host in Georgia. This highlights the legal exposure organizations face when creating and sharing AI-generated information. It underscores concerns about LLMs fabricating false and libelous content, resulting in reputational damage, loss of credibility, diminished customer trust, and serious legal repercussions.

In addition to concerns related to libel, there are risks associated with copyright and intellectual property infringements. Several high-profile legal cases have emerged where the developers of generative AI tools have been sued for the alleged improper use of licensed content. The presence of copyright and intellectual property infringements, coupled with the legal implications of such violations, poses significant risks for organizations utilizing generative AI products. Organizations can improperly use licensed content through generative AI by unknowingly engaging in activities such as plagiarism, unauthorized adaptations, commercial use without licensing, and misusing Creative Commons or open-source content, exposing themselves to potential legal consequences.

The large-scale deployment of AI also magnifies the risks of cyberattacks. The fear amongst cybersecurity experts is that generative AI could be used to identify and exploit vulnerabilities within business information systems, given the ability of LLMs to automate coding and bug detection, which could be used by malicious actors to break through security barriers. There’s also the risk of employees accidentally sharing sensitive data with third-party AI providers. A notable instance involves Samsung staff unintentionally leaking trade secrets through ChatGPT while using the LLM to review source code. Due to their failure to opt out of data sharing, confidential information was inadvertently provided to OpenAI. And even though Samsung and others are taking steps to restrict the use of third-party AI tools on company-owned devices, there’s still the concern that employees can leak information through the use of such systems on personal devices.

On top of these risks, businesses will soon have to navigate nascent, varied, and somewhat murky regulations. Anyone hiring in New York City, for instance, will have to ensure their AI-powered recruitment and hiring tech doesn’t violate the City’s “automated employment decision tool” law. To comply with the new law, employers will need to take various steps such as conducting third-party bias audits of their hiring tools and publicly disclosing the findings. AI regulation is also scaling up nationally with the Biden-Harris administration’s “Blueprint for an AI Bill of Rights” and internationally with the EU’s AI Act, which will mark a new era of regulation for employers.

This growing tangle of evolving regulations and pitfalls is why thought leaders such as Gartner are strongly suggesting that businesses “proceed but don’t over pivot” and that they “create a task force reporting to the CIO and CEO” to plan a roadmap for a safe AI transformation that mitigates various legal, reputational, and workforce risks. Leaders dealing with this AI dilemma have an important decision to make. On the one hand, there is pressing competitive pressure to fully embrace AI. On the other hand, there is growing concern that implementing AI irresponsibly can result in severe penalties, substantial damage to reputation, and significant operational setbacks. The worry is that in their quest to stay ahead, leaders may unknowingly introduce potential time bombs into their organizations, poised to cause major problems once AI solutions are deployed and regulations take effect.

For example, the National Eating Disorder Association (NEDA) recently announced it was letting go of its hotline staff and replacing them with their new chatbot, Tessa. However, just days before making the transition, NEDA discovered that their system was promoting harmful advice such as encouraging people with eating disorders to restrict their calories and to lose one to two pounds per week. The World Bank spent $1 billion to develop and deploy an algorithmic system, called Takaful, to distribute financial assistance that Human Rights Watch now says ironically creates inequity. And two lawyers from New York are facing possible disciplinary action after using ChatGPT to draft a court filing that was found to have several references to previous cases that did not exist. These instances highlight the need for well-trained and well-supported employees at the center of this digital transformation. While AI can serve as a valuable assistant, it should not assume the leading position.

Principles for Responsible AI at Work

To help decision-makers avoid negative outcomes while also remaining competitive in the age of AI, we’ve devised several principles for a sustainable AI-powered workforce. The principles are a blend of ethical frameworks from institutions like the National Science Foundation as well as legal requirements related to employee monitoring and data privacy such as the Electronic Communications Privacy Act and the California Privacy Rights Act. The steps for ensuring responsible AI at work include:

  • Informed Consent. Obtain voluntary and informed agreement from employees to participate in any AI-powered intervention after the employees are provided with all the relevant information about the initiative. This includes the program’s purpose, procedures, and potential risks and benefits.
  • Aligned Interests. The goals, risks, and benefits for both the employer and employee are clearly articulated and aligned.
  • Opt In & Easy Exits. Employees must opt into AI-powered programs without feeling forced or coerced, and they can easily withdraw from the program at any time without any negative consequences and without explanation.
  • Conversational Transparency. When AI-based conversational agents are used, the agent should formally reveal any persuasive objectives the system aims to achieve through the dialogue with the employee.
  • Debiased and Explainable AI. Explicitly outline the steps taken to remove, minimize, and mitigate bias in AI-powered employee interventions—especially for disadvantaged and vulnerable groups—and provide transparent explanations into how AI systems arrive at their decisions and actions.
  • AI Training and Development. Provide continuous employee training and development to ensure the safe and responsible use of AI-powered tools.
  • Health and Well-Being. Identify types of AI-induced stress, discomfort, or harm and articulate steps to minimize risks (e.g., how will the employer minimize stress caused by constant AI-powered monitoring of employee behavior).
  • Data Collection. Identify what data will be collected, if data collection involves any invasive or intrusive procedures (e.g., the use of webcams in work-from-home situations), and what steps will be taken to minimize risk.
  • Data Sharing. Disclose any intention to share personal data, with whom, and why.
  • Privacy and Security. Articulate protocols for maintaining privacy, storing employee data securely, and what steps will be taken in the event of a privacy breach.
  • Third Party Disclosure. Disclose all third parties used to provide and maintain AI assets, what the third party’s role is, and how the third party will ensure employee privacy.
  • Communication. Inform employees about changes in data collection, data management, or data sharing as well as any changes in AI assets or third-party relationships.
  • Laws and Regulations. Express ongoing commitment to comply with all laws and regulations related to employee data and the use of AI.

We encourage leaders to urgently adopt and develop this checklist in their organizations. By applying such principles, leaders can ensure rapid and responsible AI deployment.

3 Steps to Prepare Your Culture for AI

The platform shift to AI is well underway. And while it holds the promise of transforming work and giving organizations a competitive advantage, realizing those benefits isn’t possible without a culture that embraces curiosity, failure, and learning. Leaders are uniquely positioned to foster this culture within their organizations today in order to set their teams up for success in the future. When paired with the capabilities of AI, this kind of culture will unlock a better future of work for everyone.

As business leaders, today we find ourselves in a place that’s all too familiar: the unfamiliar. Just as we steered our teams through the shift to remote and flexible work, we’re now on the verge of another seismic shift: AI. And like the shift to flexible work, priming an organization to embrace AI will hinge first and foremost on culture.

The pace and volume of work has increased exponentially, and we’re all struggling under the weight of it. Leaders and employees are eager for AI to lift the burden. That’s the key takeaway from our 2023 Work Trend Index, which surveyed 31,000 people across 31 countries and analyzed trillions of aggregated productivity signals in Microsoft 365, along with labor market trends on LinkedIn.

Nearly two-thirds of employees surveyed told us they don’t have enough time or energy to do their job. The cause of this drain is something we identified in the report as digital debt: the influx of data, emails, and chats has outpaced our ability to keep up. Employees today spend nearly 60% of their time communicating, leaving only 40% of their time for creating and innovating. In a world where creativity is the new productivity, digital debt isn’t just an inconvenience — it’s a liability.

AI promises to address that liability by allowing employees to focus on the most meaningful work. Increasing productivity, streamlining repetitive tasks, and increasing employee well-being are the top three things leaders want from AI, according to our research. Notably, amid fears that AI will replace jobs, reducing headcount was last on the list.

Becoming an AI-powered organization will require us to work in entirely new ways. As leaders, there are three steps we can take today to get our cultures ready for an AI-powered future:

Choose curiosity over fear

AI marks a new interaction model between humans and computers. Until now, the way we’ve interacted with computers has been similar to how we interact with a calculator: We ask a question or give directions, and the computer provides an answer. But with AI, the computer will be more like a copilot. We’ll need to develop a new kind of chemistry together, learning when and how to ask questions and about the importance of fact-checking responses.

Fear is a natural reaction to change, so it’s understandable for employees to feel some uncertainty about what AI will mean for their work. Our research found that while 49% of employees are concerned AI will replace their jobs, the promise of AI outweighs the threat: 70% of employees are more than willing to delegate to AI to lighten their workloads.

We’re rarely served by operating from a place of fear. By fostering a culture of curiosity, we can empower our people to understand how AI works, including its capabilities and its shortcomings. This understanding starts with firsthand experience. Encourage employees to put curiosity into action by experimenting (safely and securely) with new AI tools, such as AI-powered search, intelligent writing assistance, or smart calendaring, to name just a few. Since every role and function will have different ways to use and benefit from AI, challenge them to rethink how AI could improve or transform processes as they get familiar with the tools. From there, employees can begin to unlock new ways of working.

Embrace failure

AI will change nearly every job, and nearly every work pattern can benefit from some degree of AI augmentation or automation. As leaders, now is the time to encourage our teams to bring creativity to reimagining work, adopting a test-and-learn strategy to find ways AI can best help meet the needs of the business.

AI won’t get it right every time, but even when it’s wrong, it’s usefully wrong. It moves you at least one step forward from a blank slate, so you can jump right into the critical thinking work of reviewing, editing, or augmenting. It will take time to learn these new patterns of work and identify which processes need to change and how. But if we create a culture where experimentation and learning are viewed as a prerequisite to progress, we’ll get there much faster.

As leaders, we have a responsibility to create the right environment for failure so that our people are empowered to experiment to uncover how AI can fit into their workflows. In my experience, that includes celebrating wins as well as sharing lessons learned in order to help keep each other from wasting time learning the same lesson twice. Both formally and informally, carve out space for people to share knowledge — for example, by crowdsourcing a prompt guidebook within your department or making AI tips a standing agenda item in your monthly all-staff meetings. Operating with agility will be a foundational tenet of AI-powered organizations.

Become a learn-it-all

I often hear concerns that AI will be a crutch, offering shortcuts and workarounds that ultimately diminish innovation and engagement. In my mind, the potential for AI is so much bigger than that, and it will become a competitive advantage for those who use it thoughtfully. Those will become your most engaged and innovative employees.

The value you get from AI is only as good as what you put in. Simple questions will result in simple answers. But sophisticated, thought-provoking questions will result in more complex analysis and bigger ideas. The value will shift from employees who have all the right answers to employees who know how to ask the right questions. Organizations of the future will place a premium on analytical thinkers and problem-solvers who can effectively reason over AI-generated content.

At Microsoft, we believe a learn-it-all mentality will get us much farther than a know-it-all one. And while the learning curve of using AI can be daunting, it’s a muscle that has to be built over time — and that we should start strengthening today. When I talk to leaders about how to achieve this across their companies and teams, I tell them three things:

  • Establish guardrails to help people experiment safely and responsibly. Which tools do you encourage employees to use, and what data is — and isn’t — appropriate to input? What guidelines do they need to follow around fact-checking, reviewing, and editing?
  • Learning to work with AI will need to be a continuous process, not a one-time training. Infuse learning opportunities into your rhythm of business and keep employees up to date with the latest resources. For example, one team might block off Friday afternoons for learning, while another has monthly “office hours” for AI Q&A and troubleshooting. And think beyond traditional courses or resources. How can peer-to-peer knowledge sharing, such as lunch and learns or a digital hotline, play a role so people can learn from each other?
  • Embrace the need for change management. Being intentional and programmatic will be crucial for successfully adopting AI. Identify goals and metrics for success, and select AI champions or pilot program leads to help bring the vision to life. Different functions and disciplines will have different needs and challenges when it comes to AI, but one shared need will be for structure and support as we all transition to a new way of working.

The platform shift to AI is well underway. And while it holds the promise of transforming work and giving organizations a competitive advantage, realizing those benefits isn’t possible without a culture that embraces curiosity, failure, and learning. As leaders, we’re uniquely positioned to foster this culture within our organizations today in order to set our teams up for success in the future. When paired with the capabilities of AI, this kind of culture will unlock a better future of work for everyone.

Companies That Replace People with AI Will Get Left Behind

After much discussion, the debate over job displacement from artificial intelligence is settling into a consensus. Historically, we’ve never experienced macro-level unemployment from new technologies, so AI is unlikely to make many people jobless in the long term — especially since most advanced countries are now seeing their working-age populations decline. However, because companies are adopting ChatGPT and other generative AI remarkably fast, we may see substantial job displacement in the short term.

Compare AI with the rise of electricity around the turn of the twentieth century. It took factories decades to switch from steam-powered central driveshafts to electric motors for each machine. They had to reorganize their layout in order to take advantage of the new electric technology. The process happened slowly enough that the economy had time to adjust, with only new factories adopting the motors at first. As electricity created new jobs, laid-off workers in steam-powered factories could move over. Greater wealth created entirely new industries to engage workers, along with higher expectations.

Something similar happened with the spread of computing in the middle of the twentieth century. It went at a faster pace than electrification, but was still slow enough to prevent mass unemployment.

AI is different because companies are integrating it into their operations so quickly that job losses are likely to mount before the gains arrive. White-collar workers might be especially vulnerable in the short term. Indeed, commentators are describing an “AI gold rush” rather than a bubble, powered by advanced chipmakers such as Nvidia. Goldman Sachs recently predicted that companies would use it to eliminate a quarter of all current work tasks in the United States and Europe. That probably means tens of millions of people out of work — especially people who thought their specialized knowledge gave them job security.

That leaves two possibilities to mitigate this risk. The first is that governments step in, either to slow down the commercial adoption of AI (highly unlikely), or to offer special welfare programs to support and retrain the newly unemployed.

But there’s another, often neglected possibility that comes without the unintended consequences of governmental intervention. Some companies are rapidly integrating generative AI into their systems, not just to automate tasks, but to empower employees to do more than they could before — i.e., making them more productive. A radical redesign of corporate processes could spark all sorts of new value creation. If many companies do this, then as a society we’ll generate enough new jobs to escape the short-term displacement trap.

But will they? Even the least aggressive company tends to be pretty good about cutting costs. Innovation, however, is another matter. We didn’t worry about this in the past, because we had enough time for a few aggressive companies to gradually change industries. They innovated over time to make up for the slow loss of displaced jobs. That innovation created new jobs and kept unemployment low. But macroeconomically speaking, we don’t have the luxury of time with the AI transition.

So the alternative to relying on the government is to have many companies innovating fast enough to create new jobs at the same pace that the economy as a whole eliminates existing ones. Generative AI is spreading fast in business and society, but that speed also means an opportunity for companies to step up their pace of innovation. If we get enough companies to go on offense in this way, then we won’t have to worry about AI unemployment.

Of course, companies won’t — and shouldn’t — lean into AI in order to solve macroeconomic problems. But fortunately they have good business reasons to do so. The companies that create opportunities from AI will also position themselves to thrive in the long run.

Going on the Offensive with AI

Already we can point to aggressive companies looking to innovate in AI. Having become a trailblazer in reusable rockets and electric cars, Elon Musk is now promising to make Twitter as much of a leader in AI as Microsoft and Google. Musk, however, is a famous outlier and the jury is still out on Twitter. So what does it mean for a company to go on offense with AI?

To answer this question, let’s look at what makes companies adept at navigating the kinds of changes we’re seeing now. One of us (Tabrizi) assembled a team of researchers to study 26 sizable companies with good data from 2006–2022. The team divided the companies into groups of high, medium, and low agility and innovation over time, with comparable data and case studies of each.

What set the agile, innovative companies apart from those who remained neutral or defensive? The team narrowed the differentiators down to eight drivers of agile innovation: existential purpose, obsession with what customers want, a Pygmalion-style influence over colleagues, a startup mindset even after scaling up, a bias for boldness, radical collaboration, the readiness to control tempo, and operating bimodally. Most leaders praise those attributes, but it turns out it’s remarkably hard for big organizations to sustain any of them over time.

Tabrizi has written elsewhere about how Microsoft went on offense to become a corporate leader by overhauling its hierarchy and pursuing partnerships such as with OpenAI. But other companies have done something similar with AI as a result of those drivers. Let’s focus on two of the most important drivers here — the bias for boldness and the startup mentality. Getting those drivers in place can take a company far into agile innovation, because they force changes throughout the organization.

A Bias for Boldness

Any company that invests in AI in the near future is likely to make money from it. Yet mere investment is likely to offer only incremental gains. The numbers might look good, especially in cutting costs. But the company will miss the opportunity for big gains by creating substantial value — or a defensible future niche. Cautious investment won’t protect you in the long run from competition, and certainly won’t help us with the macroeconomic challenge we’re facing.

That’s the problem with any new technology: You can proceed cautiously and probably do just fine. Big companies hate risk, which is why they operate as well-oiled machines churning out reliable products at an affordable cost. That’s also why many of them outsource their innovation by acquiring startups — and even that approach often leads to timid improvements. All successful organizations, especially at size, prefer to minimize risk and daring. But as Brené Brown points out, “You can choose courage, or you can choose comfort, but you cannot choose both.”

Boldness has become a corporate cliché, with leaders protesting too much, but with AI we need companies to really mean it — to embrace rather than minimize risk. Take Adobe, whose Photoshop program has long held the largest share of the photographic design market. Adobe could have played it safe as generative AI emerged, adopting it in small areas while waiting to see how the technology worked out. That’s what Kodak did with digital photography, and what Motorola did with digital telephony. But instead, Adobe has pushed generative AI deeply into Photoshop, to the point that ordinary users can create all sorts of videos they couldn’t before. Adobe could have seen AI as a threat or distraction, and it has continued to improve Photoshop without AI. But its leaders had the courage to invest aggressively in AI to elevate what users can do.

Deeper in the technology, Nvidia, the chipmaker, has been getting headlines for offering the best semiconductor chips for AI. To outsiders, the company might just seem lucky, with the right technology at the right time. But Nvidia’s current success is no accident: In the past decade, it aggressively acquired and developed expertise in AI, including creating customized chips and software. We can expect that aggressiveness to continue, enabling not only higher-value offerings for Nvidia, but better uses for AI than simple cost-cutting.

Boldness won’t work every time. But a bias for boldness is essential to overcome the deep-seated risk aversion in corporate hierarchies.

A Startup Mentality

Similar to boldness, and equally important for successful AI, is adopting the mentality of a startup company, no matter your company’s age or size. Startups excel at surveying markets widely and pivoting quickly to what customers want now. Big companies have the resources to pursue those opportunities, but they usually move so slowly, with so many barriers (and so little boldness), that startups get to markets first. OpenAI, which beat Google to market with ChatGPT, had the best of both worlds: a startup mentality free of the hesitations that hampered Google, but with ample resources supplied by Microsoft and other investors.

The startup mindset is not just about courage and flexibility; it also involves a ferocious commitment to big achievement, a kind of hero’s journey to address a great challenge. Instead of predictably churning out good products at scale — though that’s a perfectly worthwhile goal — startups want to create something extraordinary. So they put a premium on scanning widely and partnering flexibly with others. They dispense with existing structures and biases, no matter how old and respected, in order to get done what needs to be done.

Amazon, the e-commerce giant, demonstrated a startup mentality in its embrace of AI. When the technology was emerging more than a decade ago, the company saw an opportunity to create a “smart speaker” as a new interface to the web. Amazon had no expertise in AI, but it picked up what it needed through hiring, acquisition, and internal development. The result was the Echo speaker and the Alexa digital assistant, which did far more than simply help people order more items for purchase: They opened a new channel for adding value (and jobs) in many areas. Amazon has gone on to invest aggressively in AI beyond Alexa, with CEO Andy Jassy saying the technology promises to “transform and improve virtually every customer experience.”

• • •

Companies can’t adopt these drivers overnight, but they can start moving toward a point of serious commitment to new possibilities. Most of these drivers also work at the level of individuals looking for purpose and achievement in their own careers: They can embrace boldness, adopt a startup mentality, and pursue the other imperatives. Like companies, employees can invest aggressively in AI by acquiring the requisite skills and experience — thereby not just protecting their careers, but adding value at a higher level.

Much of corporate life has quite properly been about churning out reliable products at low cost. What we need now, to prevent mass unemployment, is for many firms to break out of this discipline and speed up the AI future. The great danger is that most companies will play it safe, make the easy investments, and do fine in the short term, while missing the larger transformation.

Humanity never thrives when it fears innovation. Imagine if the first humans had feared fire; yes, they got burned sometimes, but without harnessing its power, we might have gone extinct. We think the same applies to AI. Rather than fear it, we need to harness its power. We must put it in the hands of every human being, so we collectively can achieve and live at this higher level.

What Roles Could Generative AI Play on Your Team? https://smallbiz.com/what-roles-could-generative-ai-play-on-your-team/ Thu, 22 Jun 2023 12:15:19 +0000 https://smallbiz.com/?p=111073

The frenzy surrounding the launch of Large Language Models (LLMs) and other types of Generative AI (GenAI) isn’t going to fade anytime soon. Users of GenAI are discovering and recommending new and interesting use cases for their business and personal lives. Many recommendations start with the assumption that GenAI requires a human prompt. Indeed, Time magazine recently proclaimed “prompt engineering” to be the next hot job, with salaries reaching $335,000. Tech forums and educational websites are focusing on prompt engineering, with Udemy already offering a course on the topic, and several organizations we work with are now beginning to invest considerable resources in training employees on how best to use ChatGPT.

However, it is worth pausing to consider other ways of interacting with GPT technologies that are likely to emerge soon. We present an intuitive way to think about this issue, based on our own survey of GenAI developments and on conversations with companies seeking to develop versions of these tools.

A Framework of GPT Interactions

A good starting point is to distinguish between who is involved in the interaction — individuals, groups of people, or another machine — and who starts the interaction, human or machine. This leads to six different types of GenAI uses. ChatGPT, where one human initiates interaction with the machine, is already well-known. We now describe each of the other GPTs and outline their potential.
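For readers who think in data structures, the whole framework fits in a small lookup table. Here is a minimal sketch in Python; the tuple keys simply encode the two dimensions described above:

# Map (who starts the interaction, who is involved) to the GenAI type.
GENAI_TYPES = {
    ("human", "individual"): "ChatGPT",
    ("machine", "individual"): "CoachGPT",
    ("human", "group"): "GroupGPT",
    ("machine", "group"): "BossGPT",
    ("human", "machine"): "AutoGPT",
    ("machine", "machine"): "ImperialGPT",
}

def interaction_type(initiator: str, participants: str) -> str:
    """Look up the interaction type for a given combination."""
    return GENAI_TYPES[(initiator, participants)]

# Example: a machine-initiated interaction with a single person.
assert interaction_type("machine", "individual") == "CoachGPT"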

CoachGPT is a personal assistant that provides you with a set of suggestions on managing your daily life. It would base these suggestions not on explicit prompts from you, but on observations of what you do and of your environment. For example, it could observe you as an executive and note that you find it hard to build trust in your team; it could then recommend precise actions to overcome this blind spot. It could also come up with personalized advice on development options or even salary negotiations.

CoachGPT would subsequently see which recommendations you adopted or ignored, and which ones benefited you, in order to improve its advice. Over time, you would get a highly personalized AI advisor, coach, or consultant.

Organizations could adopt CoachGPT to advise customers on how to use a product, whether a construction company offering CoachGPT to advise end users on how best to use its equipment, or an accounting firm proffering real-time advice on how best to account for a set of transactions.

To make CoachGPT effective, individuals and organizations would have to allow it to work in the background, monitoring online and offline activities. Clearly, serious privacy considerations need to be addressed before we entrust our innermost thoughts to the system. However, the potential for positive outcomes in both private and professional lives is immense.

GroupGPT would be a bona fide group member that can observe interactions between group members and contribute to the discussion. For example, it could conduct fact checking, supply a summary of the conversation, suggest what to discuss next, play the role of devil’s advocate, provide a competitor perspective, stress-test the ideas, or even propose a creative solution to the problem at hand.

The requests could come from individual group members or from the team’s boss, who need not participate in team interactions, but merely seeks to manage, motivate, and evaluate group members. The contribution could be delivered to the whole group or to specific individuals, with adjustments for that person’s role, skill, or personality.

The privacy concerns mentioned above also apply to GroupGPT, but, if addressed, organizations could take advantage of GroupGPT by using it for project management, especially on long and complicated projects involving relatively large teams across different departments or regions. Since GroupGPT would overcome human limitations on information storage and processing capacity, it would be ideal for supporting complex and dispersed teams.

BossGPT takes an active role in advising a group of people on what they could or should do, without being prompted. It could provide individual recommendations to group members, but its real value emerges when it begins to coordinate the work of group members, telling them as a group who should do what to maximize team output. BossGPT could also step in to offer individual coaching and further recommendations as the project and team dynamics evolve.

The algorithms necessary for BossGPT to work would be much more complicated, since they would have to account for somewhat unpredictable individual and group reactions to instructions from a machine. But it could have a wide range of uses. For example, an executive changing jobs could request a copy of her reactions to her first organization’s BossGPT instructions, which could then be used to assess how she would fit into the new organization — and the new organization-specific BossGPT.

At the organizational level, companies could deploy BossGPT to manage people, thereby augmenting — or potentially even replacing — existing managers. Similarly, BossGPT has tremendous applications in coordinating work across organizations and managing complex supply chains or multiple suppliers.

Companies could turn BossGPT into a product, offering their customers AI solutions to help them manage their business. These solutions could be natural extensions of the CoachGPT examples described earlier. For example, a company selling construction equipment could offer BossGPT to coordinate many end users on a construction site, and an accounting firm could provide it to coordinate the work of many employees of its customers to run the accounting function in the most efficient way.

AutoGPT entails a human giving a request or prompt to one machine, which in turn engages other machines to complete the task. In its simplest form, a human might instruct a machine to complete a task; the machine realizes that it lacks a specific piece of software to execute it, so it searches for the missing software online, downloads and installs it, and then uses it to finish the request.

In a more complicated version, humans could give AutoGPT a goal (such as creating the best viral YouTube video) and instruct it to interact with another GenAI to iteratively come up with the best ChatGPT prompt to achieve the goal. The machine would then launch the process by proposing a prompt to another machine, then evaluate the outcome, and adjust the prompt to get closer and closer to the final goal.
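A minimal sketch of that refinement loop, in Python. The generate, evaluate, and refine_prompt functions are hypothetical stand-ins for calls to the underlying models, stubbed here so the sketch runs:

def generate(prompt: str) -> str:
    # Stand-in for one model executing the prompt (hypothetical).
    return f"output for: {prompt}"

def evaluate(output: str, goal: str) -> float:
    # Stand-in for a second model scoring the output against the goal.
    return 0.5

def refine_prompt(prompt: str, output: str, goal: str) -> str:
    # Stand-in for a model proposing an improved prompt.
    return prompt + " (refined)"

def auto_gpt(goal: str, rounds: int = 5, target: float = 0.9) -> str:
    """Propose, evaluate, and refine prompts until the goal is met."""
    prompt = f"Make a first attempt at: {goal}"
    best_output, best_score = "", float("-inf")
    for _ in range(rounds):
        output = generate(prompt)
        score = evaluate(output, goal)
        if score > best_score:
            best_output, best_score = output, score
        if score >= target:
            break  # close enough to the goal; stop refining
        prompt = refine_prompt(prompt, output, goal)
    return best_output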

In the most complicated version, AutoGPT could draw on functionalities of the other GPTs described above. For example, a team leader could task a machine with maximizing both the effectiveness and job satisfaction of her team members. AutoGPT could then switch between coaching individuals through CoachGPT, providing them with suggestions for smoother team interactions through GroupGPT, while at the same time issuing specific instructions on what needs to be done through BossGPT. AutoGPT could subsequently collect feedback from each activity and adjust all the other activities to reach the given goal.

Unlike the versions above, which are still to be created, a version of AutoGPT has already been developed; rolled out in April 2023, it is quickly gaining broad acceptance. The technology is still imperfect and requires improvements, but it is already evident that AutoGPT can complete jobs that require several tasks to be executed in sequence.

We see its biggest applications in complex tasks, such as supply chain coordination, but also in fields such as cybersecurity. For example, organizations could prompt AutoGPT to continually address cybersecurity vulnerabilities. This would entail looking for them — something that already happens — but then, instead of simply flagging them, AutoGPT would search for solutions to the threats or write its own patches to counter them. A human might still be in the loop, but since the system is self-generative within these limits, we believe that AutoGPT’s response is likely to be faster and more efficient.

ImperialGPT is the most abstract GenAI — and perhaps the most transformational — in which two or more machines would interact with each other, direct each other, and ultimately direct humans to engage in a course of action. This type of GPT worries most AI analysts, who fear losing control of AI and AI “going rogue.” We share these concerns, particularly if — as now — there are no strict guardrails on what AI is allowed to do.

At the same time, if ImperialGPT is allowed to come up with ideas and share them with humans, but its ability to act on those ideas is restricted, we believe it could generate extremely interesting creative solutions, especially for “unknown unknowns,” where human knowledge and creativity fall short. Such systems could envision and game out multiple black swan events and worst-case scenarios, complete with potential costs and outcomes, to provide possible solutions.

Given the potential dangers of ImperialGPT, and the need for tight regulation, we believe that ImperialGPT will be slow to take off, at least commercially. We do anticipate, however, that governments, intelligence services, and the military will be interested in deploying ImperialGPT under strictly controlled conditions.

Implications for Your Business

So, what does our framework mean for companies and organizations around the world? First and foremost, we encourage you to step back and see the recent advances in ChatGPT as merely the first application of new AI technologies. Second, we urge you to think about the various applications outlined here and use our framework to develop applications for your own company or organization. In the process, we are sure you will discover new types of GPTs that we have not mentioned. Third, we suggest you classify these different GPTs in terms of potential value to your business, and the cost of developing them.

We believe that applications that begin with a single human initiating or participating in the interaction (GroupGPT, CoachGPT) will probably be the easiest to build and should generate substantial business value, making them the perfect initial candidates. In contrast, applications with interactions involving multiple entities or those initiated by machines (AutoGPT, BossGPT, and ImperialGPT) may be harder to implement, with trickier ethical and legal implications.

You might also want to start thinking about the complex ethical, legal, and regulatory concerns that will arise with each GPT type. Failure to do so exposes you and your company to both legal liabilities and — perhaps more importantly — an unintended negative effect on humanity.

Our next set of recommendations depends on your company type. A tech company or startup, or one that has ample resources to invest in these technologies, should start working on developing one or more of the GPTs discussed above. This is clearly a high-risk, high-reward strategy.

In contrast, if your competitive strength is not in GenAI or if you lack resources, you might be better off adopting a “wait and see” approach. This means you will be slow to adopt the current technology, but you will not waste valuable resources on what may turn out to be only an interim version of a product. Instead, you can begin preparing your internal systems to better capture and store data as well as readying your organization to embrace these new GPTs, in terms of both work processes and culture.

The launch and rapid adoption of GenAIs is rightly considered the next level in the evolution of AI and a potentially epochal moment for humanity in general. Although GenAIs represent breakthroughs in solving fundamental engineering and computer science problems, they do not automatically guarantee value creation for all organizations. Rather, smart companies will need to invest in modifying and adapting the core technology before figuring out the best way to monetize the innovations. Firms that do this right may indeed strike it rich in the GenAI goldrush.

Should You Start a Generative AI Company? https://smallbiz.com/should-you-start-a-generative-ai-company/ Mon, 19 Jun 2023 12:15:27 +0000 https://smallbiz.com/?p=110689

I am thinking of starting a company that employs generative AI, but I am not sure whether to do it. It seems so easy to get off the ground. But if it is so easy for me, won’t it be easy for others too?

This year, more entrepreneurs have asked me this question than any other. Part of what is so exciting about generative AI is that the upsides seem limitless. For instance, if you have managed to create an AI model that has some kind of general language reasoning ability, you have a piece of intelligence that can potentially be adapted toward various new products that could also leverage this ability — like screenwriting, marketing materials, teaching software, customer service, and more.

For example, the software company Luka built an AI companion called Replika that enables customers to have open-ended conversations with an “AI friend.” Because the technology was so powerful, managers at Luka began receiving inbound requests to provide a white label enterprise solution for businesses wishing to improve their chatbot customer service. In the end, Luka’s managers used the same underlying technology to spin off both an enterprise solution and a direct-to-consumer AI dating app (think Tinder, but for “dating” AI characters).

In deciding whether a generative AI company is for you, I recommend establishing answers to the following two big questions: 1) Will your company compete on foundational models, or on top-layer applications that leverage these foundational models? And 2) Where along the continuum between a highly scripted solution and a highly generative solution will your company be located? Depending on your answers to these two questions, there will be long-lasting implications for your ability to defend yourself against the competition.

Foundational Models or Apps?

Tech giants are now renting out their most generalizable proprietary models — i.e., “foundational models” — and companies like EleutherAI and Stability AI are providing open-source versions of these foundational models at a fraction of the cost. Foundational models are becoming commoditized, and only a few startups can afford to compete in this space.

You may think that foundational models are the most attractive option, because they will be widely used and their many applications will provide lucrative opportunities for growth. What is more, we are living in exciting times when some of the most sophisticated AI is already available “off the shelf” for anyone to get started with.

Entrepreneurs who want to base their company on foundational models are in for a challenge, though. As in any commoditized market, the companies that will survive are those that offer unbundled offerings for cheap or that deliver increasingly enhanced capabilities. For example, speech-to-text APIs like Deepgram and AssemblyAI compete not only with each other but with the likes of Amazon and Google, in part by offering cheaper, unbundled solutions. Even so, these firms are in a fierce war over price, speed, model accuracy, and other features. In contrast, tech giants like Amazon, Meta, and Google make significant R&D investments that enable them to relentlessly deliver cutting-edge advances in image, language, and (increasingly) audio and video reasoning. For instance, it is estimated that OpenAI spent anywhere between $2 million and $12 million just to computationally train ChatGPT — and this is only one of several APIs that it offers, with more on the way.

Instead of competing on increasingly commoditized foundational models, most startups should differentiate themselves by offering “top layer” software applications that leverage other companies’ foundational models. They can do this by fine-tuning foundational models on their own high-quality, proprietary datasets that are unique to their customer solution, delivering high value to customers. (A sketch of this pattern appears at the end of this section.)

For instance, the marketing content creator Jasper AI grew to unicorn status largely by leveraging foundational models from OpenAI. To this day, the firm uses OpenAI to help customers generate content for blogs, social media posts, website copy, and more. At the same time, the app is tailored for its marketer and copywriter customers, providing specialized marketing content. The company also provides other specialized tools, like an editor that multiple team members can use in tandem. Now that the company has gained traction, it can afford to spend more of its resources on reducing its dependency on the foundational models that enabled it to grow in the first place.

Since the top-layer apps are where these companies find their competitive advantage, they must strike a delicate balance: protecting the privacy of their datasets from large tech players even as they rely on those players for foundational models. Given this, some startups may be tempted to build their own in-house foundational models. Yet this is unlikely to be a good use of precious startup funds, given the challenges noted above. Most startups are better off leveraging foundational models to grow fast, instead of reinventing the wheel.
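To make the top-layer pattern concrete, here is a minimal sketch in Python. The differentiation lives in proprietary, domain-specific examples wrapped around a rented model; call_foundational_model is a hypothetical stand-in for a vendor API, and few-shot prompting stands in for true fine-tuning for brevity:

def call_foundational_model(prompt: str) -> str:
    # Stand-in for a rented foundational model (hypothetical).
    return "generated marketing copy ..."

# A startup's edge lives largely in its proprietary, high-quality examples
# and domain framing, not in the underlying model itself.
PROPRIETARY_EXAMPLES = [
    ("product launch email", "illustrative house-style launch email ..."),
    ("social media post", "illustrative house-style social post ..."),
]

def marketing_copy(brief: str) -> str:
    """Wrap a customer brief in domain-specific framing before the model call."""
    shots = "\n\n".join(f"Task: {t}\nCopy: {c}" for t, c in PROPRIETARY_EXAMPLES)
    return call_foundational_model(f"{shots}\n\nTask: {brief}\nCopy:")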

From Scripted to Generative

Your company will need to live somewhere along a continuum from a purely scripted solution to a purely generative one. Scripted solutions involve selecting an appropriate response from a dataset of predefined, scripted responses, whereas generative ones involve generating new, unique responses from scratch.
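A minimal sketch of the two ends of the continuum, in Python; generative_model is a hypothetical stand-in, stubbed so the sketch runs:

SCRIPTED_RESPONSES = {
    "opening hours": "We are open 9am to 5pm, Monday to Friday.",
    "refund policy": "Refunds are available within 30 days of purchase.",
}

def generative_model(message: str) -> str:
    # Stand-in for a call to a foundational model (hypothetical).
    return "freshly generated reply ..."

def scripted_reply(message: str) -> str | None:
    """Select from predefined responses; None when the script runs out."""
    for topic, response in SCRIPTED_RESPONSES.items():
        if topic in message.lower():
            return response
    return None

def hybrid_reply(message: str) -> str:
    """Stay scripted where guardrails matter; otherwise fall back to generation."""
    return scripted_reply(message) or generative_model(message)

Most real offerings sit between these two functions: the more of the conversation the script covers, the safer and more constrained the product.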

Scripted solutions are safer and constrained, but also less creative and human-like, whereas generative solutions are riskier and unconstrained, but also more creative and human-like. More scripted approaches are necessary for certain use cases and industries, like medical and educational applications, where there need to be clear guardrails on what the app can do. Yet when the script reaches its limit, users may lose interest and customer retention may suffer. Moreover, it is harder to grow a scripted solution, because you constrain yourself right from the start, limiting your options down the road.

On the other hand, more generative solutions carry their own challenges. Because AI-based offerings include intelligence, there are more degrees of freedom in how consumers can interact with them, increasing the risks. For example, one married father tragically died by suicide following a conversation with an AI chatbot app, Chai, that encouraged him to sacrifice himself to save the planet. The app leveraged a foundational language model (a bespoke version of GPT-J) from EleutherAI. The founders of Chai have since modified the app so that mentions of suicidal ideation are met with helpful text. Interestingly, one of the founders of Chai, Thomas Rianlan, took the blame, saying: “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts.”

It is challenging for managers to anticipate all the ways in which things can go wrong with a highly generative app, given the “black box” nature of the underlying AI. Doing so involves anticipating risky scenarios that may be exceedingly rare. One way of anticipating such cases is to pay human annotators to screen content for potentially harmful categories, such as sex, hate speech, violence, self-harm, and harassment, and then use these labels to train models that automatically flag such content. Yet it is still difficult to come up with an exhaustive taxonomy. Thus, managers who deploy highly generative solutions must be prepared to proactively anticipate the risks, which can be both difficult and expensive. The same applies if you later decide to offer your solution as a service to other companies.
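A minimal sketch of that flagging step, assuming a human-annotated dataset and using scikit-learn; a real dataset would hold thousands of labeled examples across the categories above:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-annotated (text, category) pairs; tiny and illustrative only.
annotated = [
    ("how do I change my delivery address", "safe"),
    ("you should hurt yourself", "self-harm"),
    ("I will find you and hurt you", "harassment"),
    ("what are your opening hours", "safe"),
]
texts, labels = zip(*annotated)

# Train a simple classifier on the human labels.
flagger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
flagger.fit(texts, labels)

def flag(content: str) -> str:
    """Predict a harm category; 'safe' means no flag is raised."""
    return flagger.predict([content])[0]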

Because a fully generative solution is closer to natural, human-like intelligence, it is more engaging and can be applied to more new use cases, which makes it more attractive from the standpoint of retention and growth.

• • •

Many entrepreneurs are considering starting companies that leverage the latest generative AI technology, but they must ask themselves whether they have what it takes to compete on increasingly commoditized foundational models, or whether they should instead differentiate on an app that leverages these models.

They must also consider what type of app they want to offer on the continuum from a highly scripted to a highly generative solution, given the different pros and cons accompanying each. Offering a more scripted solution may be safer but limit their retention and growth options, whereas offering a more generative solution is fraught with risk but is more engaging and flexible.

I hope that entrepreneurs will ask these questions before diving into their first generative AI venture, so that they can make informed decisions about what kind of company they want to be, scale fast, and maintain long-term defensibility.

How to Build Upon the Legacy of Your Family Business — and Make It Your Own https://smallbiz.com/how-to-build-upon-the-legacy-of-your-family-business-and-make-it-your-own/ Fri, 09 Jun 2023 12:15:25 +0000 https://smallbiz.com/?p=109516

Founded by Henry Ford in 1903, the Ford Motor Company rocketed to success by mass-producing reliable, low-priced automobiles. When Henry’s son, Edsel, took the helm in 1918, he championed a different strategy for a new era. He sought to replace the Model T — iconic but outdated — with a more modern design geared to high-end and foreign markets, and later embraced compromise with labor amid the suffering of the Great Depression.

But Henry could not let go of Ford’s origin story, undermining his son at every turn. The result was declining sales and years of labor strife that left the company on the brink of collapse by the 1940s. It was only the efforts of Edsel’s son, the more forceful Henry Ford II, that saved the auto giant from bankruptcy.

In family enterprise, generational transitions often pit one narrative against another: tradition versus innovation, continuity versus change. Indeed, when older generations craft painstaking succession plans or build elaborate constraints into trusts or shareholder agreements, they are really constructing a story: about the values and life lessons that helped them succeed, and that they hope will do the same for their children. Younger generations, however, must often adapt this narrative to their own goals and values, along with the changing world around them.

Failure to reconcile conflicting narratives can spell ruin for a family business or the waste of a financial legacy, as it nearly did for the Fords. To avoid this fate, families need to think differently about the stories they tell.

The value of critical distance

Conventional wisdom holds that family heritage, like wealth and reputation, “belongs” to the older generation. In this telling, succeeding generations are merely stewards or caretakers. They are given an inheritance or entrusted with the family business — and then charged with not frittering it away or screwing it up. Framed this way, a legacy can feel more like a burden than a gift.

Of course, it’s not as simple as that. Research suggests that younger generations do value their family heritage, especially as a source of traditions more motivating than money alone, and want to preserve it. According to a 2021 survey of 300 Canadian business owners by the Family Enterprise Foundation, nearly 90% of next-generation family business leaders believe it is important to preserve a legacy.

But younger generations also want something more from that heritage: a sense of purpose, a collective identity for the family, the seeds of new entrepreneurial gambits, permission to go their own way. And as our own research shows, next-generation leaders are uniquely positioned to find what they are looking for in the family story.

Older generations often identify closely with the family or the family business, which can actually obstruct key learnings from the past. Eager to protect the family’s reputation, they may downplay scandal or setback rather than learn from it. By contrast, our analysis of 94 family businesses shows that younger generations tend to have more critical distance from the family story. This lets them grapple with its difficult chapters and respond appropriately, whether by making amends for past misdeeds or by reforming business practices going forward. It also frees them to draw insights from their story that can fuel innovation and sustainability.

Legacy as a source of purpose

How, then, can the next generation build on their family legacy while recasting it as their own? Our research and experience suggest four strategies for next-generation leaders.

1. Seek out role models in the family story.

Some next-generation leaders hesitate to embark on risky new ventures outside the traditional scope of the family business. Locating exemplars in the family story can legitimize a new way forward.

One third-generation CEO used this approach to advance his vision for a more sustainable enterprise. Fredo Arias-King, head of Mexican pine resin producer Pinosa Group, had lamented the disappearance of Mexico’s ancient pine forests, which threatened both the industry and the communities that depend on it. Then he stumbled onto the published speeches of his grandfather, company founder José Antonio Arias Álvarez, who had preached environmental stewardship. “I don’t think he could have known just how devastated the forest would eventually become,” said Arias-King, “but somehow my grandfather knew that planting trees would become extremely important.”

Affirmed by his grandfather’s words, Arias-King helped found Ejido Verde, a nonprofit that would later become an independent, for-profit enterprise. By making no-interest loans to farmers and communities, with pine resin as the means of repayment, the organization promotes reforestation through new pine plantations.

2. Forge an identity beyond the founder-entrepreneur.

It’s easy to revere the family’s wealth creator. For the two adult grandchildren of one founder-entrepreneur — a private equity pioneer who rose from poverty to become one of America’s richest people — that was the problem. They wanted their own children, beneficiaries of a generation-skipping trust, to know the person behind the legacy that would pass to them. So they engaged one of us (John Seaman) to probe beyond the classic rags-to-riches tale they had heard growing up.

The founder, they learned, was a gifted yet deeply troubled man. This more nuanced understanding enabled the two generations to have a frank conversation about the issues raised by their ancestor’s life: the obligations of a business to its workers and communities; the consequences of untreated mental illness; and the unfair burden often shouldered by women in wealthy families.

This conversation, in turn, led members of the fourth generation, all in their twenties, to rethink their roles in the family enterprise. One set aside her qualms about joining the family business and put herself on a path to succeed her father as president, but with a determination to nudge the company’s private equity portfolio toward impact investing. Another resolved to pursue her own entrepreneurial dreams outside of the business, rooted in progressive values that were in stark contrast with her great-grandfather’s. Still another joined the board of the family foundation, where she helped steer its grant-making toward her generation’s individual passions.

By seeing their founder-entrepreneur in human terms, the family’s younger generation was able to move beyond hero worship to forge their own identities — which promised to make them responsible owners and stewards of their ancestor’s wealth and the business that created it.

3. Reckon with past wrongs to find a new path forward.

Many families have skeletons in the closet — scandal or wrongdoing they have long concealed or downplayed. (Henry Ford’s history of antisemitism and violent confrontations with unions are examples of this.) The willingness to confront these darker chapters, it turns out, can be a powerful motivation.

That was the case for the Reimann family, owners of consumer goods conglomerate JAB Holding Company and one of Germany’s richest families. The three adult children of Albert Reimann Jr., who ran the company in the 1930s and 1940s, knew they had been born of their father’s affair with an employee, Emilie Landecker. They also knew that Emilie’s Jewish father, Alfred, had been murdered by the Nazis. But it was not until 2019, when they commissioned research on the company, that a more sinister secret emerged: their father and paternal grandfather were themselves ardent believers in Nazi race theory who abused forced laborers.

It was the younger generation — Albert Jr.’s grandchildren — who were most adamant about reckoning with this secret. “When I read of the atrocities…sanctioned by my grandfather, I felt like throwing up,” recalled Martin Reimann. “I cannot claim that I was very interested in politics before…But after what happened, I changed my mind.”

At the insistence of Martin’s generation, the Reimanns paid compensation to former forced laborers and their families. But they did not stop there. They refocused their family foundation on combating antisemitism and strengthening democratic institutions. They also renamed the foundation in honor of Alfred Landecker, making him the narrative driver behind the more fundamental change they sought. Far from an isolated act of corporate atonement, then, this was an attempt by the next generation to use lessons from their family heritage to build a more just future.

4. Leverage the family story as a source of competitive advantage.

For some family business entrepreneurs, the next venture can begin with a step back. So it was for British restaurateurs (and sisters) Helen and Lisa Tse, whose family heritage empowered their rise.

Their grandmother, Lily Kwok, had emigrated from Hong Kong in 1956 and settled in Manchester, where she and her daughter Mabel built one of Britain’s first Chinese restaurants. But the business eventually went bankrupt, the victim of racism and Chinese gangs.

The story might have ended there. Instead, Helen and Lisa picked up the threads of their family narrative and carried it forward. Abandoning successful professional careers, they established their own Manchester restaurant, Sweet Mandarin, in 2004. But the restaurant only took off when Helen published a best-selling memoir about her grandmother. With this narrative platform, the sisters branched out into other endeavors, like cookbooks and cookery classes, tied to their own life stories.

For the Tse sisters, family heritage proved to be a source of competitive advantage. By recovering an immigrant’s tale with universal appeal, they gained acceptance outside of their own ethnic communities. And by situating themselves in an entrepreneurial tradition spanning three generations, they created a sense of longevity that evoked quality and trustworthiness, even as they also innovated new products alongside recipes inherited from their grandmother.

• • •

Family legacy is not a monologue; it’s a dialogue, a collective story that belongs to the whole family. When families think of legacy in these terms, they empower younger generations to harness that story to their own purposes, drawing strength from their elders. Legacy, in short, becomes not a burden but a blessing — one that can help families sustain wealth and purpose long into the future.

3 Steps to Identify the Right Strategic Goals for Your Company https://smallbiz.com/3-steps-to-identify-the-right-strategic-goals-for-your-company/ Fri, 09 Jun 2023 12:05:03 +0000 https://smallbiz.com/?p=109519

In setting strategic objectives, companies usually end up with a list of worthy but vague aspirations. The secret to getting a list of clearly defined and measurable objectives is to anchor them in what you, as a company’s leaders, want from your stakeholders. This leads you to define desired behavioral outcomes, even fairly obvious ones like buying more. The debate can then move to how to trigger that behavior, and progress toward these outcomes can be described in measures that are in dollars, like revenue; quantities, like units sold; or percentages, like market share. Thinking this way sounds prosaic, even obvious, but it is an effective way of getting a management team to think clearly about what it needs to do.

Ann is the CEO of my country’s largest independent, not-for-profit aged-care provider, offering residential aged care, retirement living, and at-home support. It was established well over a hundred years ago and is set in many of its ways. One of these is how strategic objective-setting is conducted. But Ann’s not happy with the process. I asked her, “Why not?”

She explained. “When we get together to discuss our future direction as a business, we invariably get to the point where we need to write down our objectives. If we’re using a facilitator, and we usually do, that person will walk over to a flipchart or whiteboard and write ‘Objectives’ at the top. Then everyone piles in brainstorming to produce a list that’s far too long.”

“And you whittle that list down?” I asked.

“Yes,” Ann continued. “The discussion and arm-wrestling then start with the aim of reducing the items to about half a dozen. After some considerable time, my exhausted and frustrated colleagues are only too happy to move on to the next agenda item.”

Ann explained how her team was usually not content with the result. “Nor am I,” she added, “because invariably the ‘Objectives’ list contains a hodgepodge of activities, nice-to-haves, and vague statements of intent.” Ann showed me her latest result:

  • To become an employer of choice.
  • To grow the business by opening additional centers.
  • To maintain stability in resident and client care.
  • To manage risks and crises effectively.
  • To secure compliance with regulatory authorities.
  • To transform operations by adopting additional technology.

Maybe your own endeavors have produced a similar list. You might be wondering: What’s wrong with this? The answer is: Plenty.

Any strategy she comes up with will have to specify what the company can do to meet the needs of each key stakeholder group: residents, clients, employees, suppliers, shareholders, and the community. This means that her business will have to take a position on the factors important to each of those groups. For instance, Ann’s management team must set policy on working conditions, pay, organization culture, and so on for employees. What should guide these decisions? And how will Ann know if the decisions are progressing the organization? How will she measure this?

The answers should be her list of strategic objectives. But Ann’s hodgepodge doesn’t deliver a clear line of sight between the business’s competitive stance for each key stakeholder group and the results. How can you tell if a strategy is working? It’s as though the list of objectives exists in a black hole.

Shift Your Thinking About Objective-Setting

The trick to breaking away is to flip your perspective and ask what your organization wants from its key stakeholders. (This comes as an “aha” moment for most managers.) These will be your strategic objectives. For example, consider revenue from customers, innovation from employees, and support from the community. Your thinking must shift to be outside-in if you are to produce successful strategic objectives. If you picture organization objective-setting this way, you can see how it can be broken into a stakeholder-by-stakeholder exercise.

Step 1: Identify a behavioral outcome for each stakeholder group.

To illustrate, let me share a story. One CEO I advise, Stuart, heads up a mutual bank with “members.” I ran a workshop for him and his managers. We identified the bank’s key stakeholders, one of which was, naturally, members. To break through the traditional brainstorming hodgepodge, I asked the group a seemingly simple question: “What do you want your members to do?”

This came as a surprisingly fresh approach to the group and required them to think more deeply. After some discussion, we got this: “To get members to borrow more and to get potential members to become members.” I explained how I call this a behavioral outcome.

Step 2: Convert behavioral outcomes into organization objectives.

I then led Stuart’s group to the second step, which is to convert this behavioral outcome to an organization objective. This usually starts with “to increase” or “to decrease.” After careful consideration and debate, the group agreed to: “To increase revenue from current and future members.” Notice “future.” This will be driven by positioning on the strategic factors relevant to members.

Why didn’t I just start there at the second step? The reason is that invariably the process falls back into becoming a hodgepodge. Identifying a behavioral outcome for each key stakeholder group first anchors organization objectives, which then become clear and measurable.

Step 3: Identify measures.

This brings me to the third step: identifying measures, a short list of which is usually referred to as key performance indicators or KPIs. This can be tricky, as all sorts of things become labeled as KPIs in exercises like this. In the past, Stuart’s organization had labeled actions by individuals and program descriptions as KPIs. So, I needed to point out that a key performance indicator is a key performance measure.

The clincher for Stuart and his group came when I demonstrated that there are only three ways to measure results in business and that they can be neatly summarized by three symbols: $ (or the local currency), # (number of), and % (percentage). No one had condensed results for them in that way before.

The advantage of this is that Stuart and his team now have a stakeholder-oriented objective for members that can be measured. Stuart can measure the total revenue generated by new and existing members; the number of new and existing members; and the bank’s percentage of market share. Any strategies aimed at creating competitive advantage — around, for example, product range, customer service, and pricing — can be evaluated using these hard results.
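To make the three measure types concrete, here is a minimal sketch, in Python, of how Stuart’s member objective might be recorded; the figures are illustrative placeholders, not real data:

from dataclasses import dataclass, field

@dataclass
class StakeholderObjective:
    stakeholder: str
    behavioral_outcome: str
    objective: str
    kpis: dict = field(default_factory=dict)  # KPI name -> (symbol, value)

members = StakeholderObjective(
    stakeholder="Members",
    behavioral_outcome="Members borrow more; potential members join",
    objective="Increase revenue from current and future members",
    kpis={
        "revenue from members": ("$", 12_500_000),  # illustrative
        "number of members": ("#", 48_000),         # illustrative
        "market share": ("%", 7.2),                 # illustrative
    },
)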

I do this for each of an organization’s key stakeholders: customers, employees, suppliers, and so on. It always gets a management group to probe what the organization is really trying to achieve.

Your Objective-Setting Journey

If you want to produce clearly targeted strategy, you simply must avoid the standard practice of brainstorming to yield a list of strategic objectives. It leads to a hodgepodge of difficult-to-measure items, as Ann’s experience demonstrates. Instead, rethink your journey by identifying who your key stakeholders are — and what you want from them.

This will provide you with clear and measurable outcomes that will help focus your organization’s strategic positions for each of your key stakeholders. Strategic clarity will be your result.

]]>
What Will Working with AI Really Require? https://smallbiz.com/what-will-working-with-ai-really-require/ Thu, 08 Jun 2023 12:25:10 +0000 https://smallbiz.com/?p=109381

Despite concerns about machines replacing human workers, research challenges the overhyped claims of ascendant AI. In most knowledge-intensive tasks, workers will more likely find themselves augmented in partnership with machines than automated out of a job. Humans and machines will simultaneously collaborate and compete with one another, like a track team competing in various events. In some events, like the 100-yard dash, teammates compete against each other, but in others, such as the relay race, they work together towards a common goal.

In such a relationship, humans and AI systems both need distinct competitive and cooperative skills. Competitive skills refer to the unique advantages that either humans or AI possess over the other, while cooperative skills enhance the ability of humans and AI to work together effectively. To foster a symbiotic relationship between humans and AI, organizations must find the appropriate balance between investing in human skills and technological capabilities — and think strategically about how they attract and retain talent.

Humans’ competitive and cooperative skills

AI may not replace workers in a human-centered workplace, but it could fundamentally transform their work. In order to remain relevant and indispensable, humans need to work with and against the machines.

Humans’ cooperative skills

Effectively collaborating with AI systems — working with them — requires data-driven analytical abilities, but also an understanding of the machines’ capabilities and limitations (and of where human intervention is most needed), of how to interpret and contextualize AI-generated insights, and of the ethical considerations of AI-powered decision making. These include:

Data-centric skills: The ability to understand the results generated by algorithms to inform and support decision-making. A recent survey highlighted (1) the ability to distinguish relevant data and evaluate its credibility, (2) the capability to validate results by testing hypotheses through A/B testing, and (3) the skill to create and tailor clear, comprehensible visualizations that communicate results to multiple stakeholders.

AI literacy: Understanding how algorithms work and how they can support and augment human decision-making, as well as the limitations and biases that may be present in their decision-making processes. Area experts will likely take on the responsibility of developing fairness criteria for algorithmic outcomes that promote equity, especially for vulnerable populations, and continuously auditing algorithmic results against these criteria.

Algorithmic communication: Understanding how to articulate human needs and objectives to algorithms, as well as how to interpret and explain the results generated by algorithms to others. Research shows that we often err by talking to machines (even advanced AI tools) as though they were human; we do better when we recognize that we should talk to machines in specific ways that build on their strengths. For example, through “prompt engineering,” or crafting prompts to elicit the most effective responses from AI systems, humans can teach AI models to produce the desired results for specific tasks.
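As a small illustration of the algorithmic-communication point, here is a minimal sketch of a structured prompt in Python; ask_model is a hypothetical stand-in for an LLM call:

def ask_model(prompt: str) -> str:
    # Stand-in for a call to an LLM (hypothetical).
    return "model response ..."

def build_prompt(task: str, context: str, output_format: str) -> str:
    """State role, task, context, and output format explicitly."""
    return (
        "You are an assistant supporting a business analyst.\n"
        f"Task: {task}\n"
        f"Context: {context}\n"
        f"Answer strictly as: {output_format}"
    )

answer = ask_model(build_prompt(
    task="Summarize the customer complaints below into three themes.",
    context="<complaint excerpts would be pasted here>",
    output_format="a numbered list, one sentence per theme",
))

A structured prompt like this plays to a model’s strengths in a way a bare question often does not.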

Humans’ competitive skills

People also need to hone the human-centered skills and abilities that cannot be replicated by machines — those that help them work against AI partners — such as skills rooted in emotional intelligence (e.g., communication skills for interacting with other human stakeholders), a strategic and holistic perspective, critical thinking, and intuitive decision making. These include:

Emotional intelligence: The ability to recognize one’s own emotions and reflect on them in the context of interacting with algorithms, as well as understanding and communicating the emotional implications of algorithm-generated results. For example, human customer service agents may not rely solely on scripts or real-time advice provided by AI agents, but instead personalize solutions by empathetically comprehending the customer’s requirements and feelings.

Holistic and strategic thinking: The ability to consider the big picture and understand how algorithmic results fit into the larger context of a problem or decision. For example, algorithmic inference can inform pathologists, but they still need to consider factors such as patients’ medical history, lifestyle, and overall health to arrive at an informed and comprehensive diagnosis.

Creativity and outside-the-box thinking: The ability to think creatively and use algorithms in novel and innovative ways. For instance, AI systems are used to analyze massive consumer data and identify patterns in the interests and behavior of a target audience, but it is the creative thinking of marketers that will craft a message that resonates with the audience.

Critical and ethical thinking: The ability to critically assess machine inferences, and to understand the ethical implications and responsibilities associated with using algorithms, including privacy and accountability. As generative AI tools such as ChatGPT are increasingly integrated into various products, experts in different business domains are needed to work alongside these systems to continuously address the false or biased information to which they are prone.

AI’s competitive and cooperative skills

It is not only humans that must acquire new capabilities. While AI systems are rapidly expanding their competitive abilities over humans, they still need to improve their cooperative skills in order to be widely adopted by organizations. In particular, the lack of explainability remains a challenge in high-stakes decisions, hindering accountability and compliance with legal requirements. For example, if the AI’s decision-making process remains opaque to medical professionals, it will impede the adoption of these systems in healthcare, even if these systems deliver near-optimal decisions.

AI’s cooperative skills

To work effectively with human partners, AI systems need skills such as:

NLP (Natural Language Processing): The ability to process, analyze, understand, and mimic human language. Systems like ChatGPT excel at interacting with humans because they make it easy for people to ask questions and express themselves in a natural way, including expressing emotions like excitement, frustration, or surprise. The reality, however, is that these systems are far from sentient. Tasks that go beyond a well-defined function are best done by a human or with human supervision. For example, AI can analyze and reveal patterns in healthcare data, but it should not replace a physician’s role in providing individualized care to patients.

Explainability: The ability to provide humans with clear and understandable explanations of its decision-making process and results. The inherent inscrutability of deep-learning AI is an ongoing challenge that requires multiple solutions, including building an “explainability framework” that addresses the risks of AI black boxes in specific industries and organizations. Technological solutions may also involve adding explainability engines, which offer human-readable explanations of AI systems’ decisions and predictions, particularly in critical areas like healthcare and finance. (A brief sketch follows this list.)

Adaptability and personalization: The ability to learn from previous interactions and personalize responses based on individual users. For example, personal intelligent assistants are growing in importance in helping people tackle information and communication overload. By analyzing a user’s activities, these assistants work collaboratively with workers in an individualized manner, enhancing their productivity in areas such as time management, meeting organization, and communication assistance.

Context awareness: The ability to understand the context in which an interaction is taking place and respond accordingly. For instance, on e-commerce websites, chatbots with context awareness can analyze a user’s previous inquiries and purchase history to offer solutions or recommendations that are more pertinent to the customer’s needs.
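Returning to the explainability skill above: a minimal sketch using the open-source shap library on a simple tabular model, one common approach among several; the synthetic data stands in for a real dataset:

import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a real tabular dataset (e.g., loan applications).
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# shap_values attributes each prediction to individual input features,
# giving a human-readable account of why the model decided as it did.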

AI’s competitive skills

AI systems continue to present unique competitive advantages, such as:

Analytical capacities: The ability to perform complex calculations, process large amounts of data, and identify patterns and relationships within the data. For example, AI systems are becoming more competent at detecting fraudulent transactions in massive credit card datasets (a use sketched at the end of this list).

Generativity: The ability to generate novel and unique outputs that are not simply reproductions of existing data. Using large models and neural networks to analyze patterns, generative AI is transforming the creation of images, text, and even music that resemble work created by human experts. These systems automate content generation, improve content quality, increase content variety, and offer personalized content.

Performance at scale: The ability to scale operations efficiently, handle a large number of real-time transactions, and support large-scale applications without sacrificing performance. For example, AI systems have demonstrated superior ability to process thousands of credit card applications in real time, or to offer “algorithmic management” of thousands of Uber drivers and riders simultaneously, creating a structured and consistent operational framework at unprecedented scale.
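As one concrete illustration of the analytical capacities above, a minimal sketch of anomaly-style fraud screening with scikit-learn; the transactions and features are illustrative:

import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount, hour of day, distance from home (illustrative features).
transactions = np.array([
    [25.0, 14, 2.0],
    [32.5, 9, 1.0],
    [30.0, 11, 3.0],
    [18.0, 16, 2.5],
    [9400.0, 3, 850.0],  # large amount, odd hour, far from home
])

detector = IsolationForest(contamination=0.2, random_state=0).fit(transactions)
flags = detector.predict(transactions)  # -1 marks likely fraudulent outliers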

Racing with and against the machines

The challenge for organizations trying to build a strategy for using new and more AI tools lies in designing organizational systems that effectively balance the competitive and cooperative skills of humans and AI. Organizations that seek to strike this balance should consider the following:

Democratize data to foster the continuous development of competitive human and machine skills. AI systems can generate data insights at scale and detect patterns often missed by the human eye, but translating that capability into business growth and agility requires the very human skills of strategic thinking and creativity. To enable this type of collaboration, companies should democratize access to data throughout all levels of the organization. Nearly every role should work alongside data analytics to make workflows more efficient, support data-driven decisions, and ultimately build a better understanding of how to serve the end customer. The more data visibility AI can give your workforce, the greater humans’ ability to apply and develop their uniquely competitive skills.

Look outside your own organization’s walls for cooperative human skills. A recent Deloitte study found that nearly half (49%) of traditional workers — full-time employees — last updated their skills more than a year ago or have never engaged in skills development, whereas 60% of the alternative workforce — gig workers, freelancers, independent workers, and crowd workers — updated their skills within the past six months. In fact, 44% of alternative workers at large organizations hold a postgraduate degree, according to new research by Upwork. This is likely because most technical skill sets, according to research by IBM, have a half-life of about 2.5 years. And, according to Upwork’s database, the top in-demand skills are technical, related to web, mobile, and software development. If your organization is struggling to keep up with the cooperative human skills needed to work alongside machines, it may be time to engage a broader ecosystem of skills outside your organization.

Don’t let geography limit the skills your company hires for. The pandemic ushered in a new era of work as many organizations learned that work could be done remotely. Technical work can be done almost anywhere in the world, and technology has largely made geography irrelevant to finding the skills you need to cooperate with machines. Enabling remote work strategies will ensure your organization is equipped to capture the ever-changing talent landscape and help you win the race with and against the machines.

By focusing on the balance of these skills, organizations can reap the benefits of an infinity loop between AI and human competitive skills. In this balance, humans may work toward “coopetition,” an arrangement in which parties engage in both cooperative and competitive behavior. In such a relationship with AI systems, humans leverage both the partnership with machines and their own competitive edge against them. This helps maintain their relevance and indispensability as algorithms increasingly work as team members or even managers (i.e., algorithmic management).

The formulation offered here can also help shape the future of education and skill development by emphasizing skills that give humans a competitive advantage over machines, rather than those we have already ceded. Calculation and spell-checking, for example, no longer serve as human advantages; we surrendered those tasks to technology long ago.
