Analytics and data science | SmallBiz.com

How to Train Generative AI Using Your Company’s Data https://smallbiz.com/how-to-train-generative-ai-using-your-companys-data/

Many companies are experimenting with ChatGPT and other large language or image models. They have generally found them to be astounding in terms of their ability to express complex ideas in articulate language. However, most users realize that these systems are primarily trained on internet-based information and can’t respond to prompts or questions regarding proprietary content or knowledge.

Leveraging a company’s proprietary knowledge is critical to its ability to compete and innovate, especially in today’s volatile environment. Organizational innovation is fueled by the effective and agile creation, management, application, recombination, and deployment of knowledge assets and know-how. However, knowledge within organizations is typically generated and captured across various sources and forms, including individual minds, processes, policies, reports, operational transactions, discussion boards, and online chats and meetings. As such, a company’s comprehensive knowledge is often unaccounted for and difficult to organize and deploy where needed in an effective or efficient way.

Emerging technologies in the form of large language and image generative AI models offer new opportunities for knowledge management, thereby enhancing company performance, learning, and innovation capabilities. For example, in a study conducted at a Fortune 500 provider of business process software, a generative AI-based system for customer support increased the productivity of customer support agents, improved their retention, and drew more positive feedback from customers. The system also expedited the learning and skill development of novice agents.

Like that company, a growing number of organizations are attempting to leverage the language processing skills and general reasoning abilities of large language models (LLMs) to capture and provide broad internal (or customer) access to their own intellectual capital. They are using them for such purposes as informing customer-facing employees on company policy and product/service recommendations, solving customer service problems, and capturing employees’ knowledge before they depart the organization.

These objectives were also present during the heyday of the “knowledge management” movement in the 1990s and early 2000s, but most companies found the technology of the time inadequate for the task. Today, however, generative AI is rekindling the possibility of capturing and disseminating important knowledge throughout an organization and beyond its walls. As one manager using generative AI for this purpose put it, “I feel like a jetpack just came into my life.” Despite current advances, some of the same factors that made knowledge management difficult in the past are still present.

The Technology for Generative AI-Based Knowledge Management

The technology to incorporate an organization’s specific domain knowledge into an LLM is evolving rapidly. At the moment there are three primary approaches to incorporating proprietary content into a generative model.

Training an LLM from Scratch

One approach is to create and train one’s own domain-specific model from scratch. That’s not a common approach, since it requires a massive amount of high-quality data to train a large language model, and most companies simply don’t have it. It also requires access to considerable computing power and well-trained data science talent.

One company that has employed this approach is Bloomberg, which recently announced that it had created BloombergGPT for finance-specific content and a natural-language interface with its data terminal. Bloomberg has over 40 years’ worth of financial data, news, and documents, which it combined with a large volume of text from financial filings and internet data. In total, Bloomberg’s data scientists employed 700 billion tokens, or about 350 billion words, 50 billion parameters, and 1.3 million hours of graphics processing unit time. Few companies have those resources available.

Fine-Tuning an Existing LLM

A second approach is to “fine-tune” an existing LLM, adding specific domain content to a system that is already trained on general knowledge and language-based interaction. This approach involves adjusting some of the parameters of a base model, and it typically requires substantially less data — usually only hundreds or thousands of documents, rather than millions or billions — and less computing time than creating a new model from scratch.
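To make the idea concrete, here is a minimal sketch of what fine-tuning can look like using the open-source Hugging Face transformers library; the base model, file name, and hyperparameters are illustrative placeholders, not the stack used by any company mentioned in this article.

```python
# Minimal fine-tuning sketch with Hugging Face transformers (illustrative only).
# The model name, file path, and hyperparameters are placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

base_model = "gpt2"  # stand-in for whatever base LLM is being fine-tuned
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# A few thousand domain documents, one per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "domain_docs.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-llm", num_train_epochs=3,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # adjusts the base model's weights on the domain corpus
```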

Google, for example, used fine-tune training on its Med-PaLM2 (second version) model for medical knowledge. The research project started with Google’s general PaLM2 LLM and retrained it on carefully curated medical knowledge from a variety of public medical datasets. The model was able to answer 85% of U.S. medical licensing exam questions — almost 20% better than the first version of the system. Despite this rapid progress, when tested on such criteria as scientific factuality, precision, medical consensus, reasoning, bias and harm, and evaluated by human experts from multiple countries, the development team felt that the system still needed substantial improvement before being adopted for clinical practice.

The fine-tuning approach has some constraints, however. Although it requires much less computing power and time than training an LLM from scratch, fine-tuning can still be expensive, which was not a problem for Google but would be for many other companies. It also requires considerable data science expertise; the scientific paper for the Google project, for example, had 31 co-authors. Some data scientists argue that fine-tuning is best suited not to adding new content, but rather to adding new content formats and styles (such as chat or writing like William Shakespeare). Additionally, some LLM vendors (for example, OpenAI) do not allow fine-tuning on their latest LLMs, such as GPT-4.

Prompt-tuning an Existing LLM

Perhaps the most common approach to customizing the content of an LLM, at least for companies that are not cloud vendors, is to tune it through prompts. With this approach, the original model is kept frozen and is modified only through prompts in the context window that contain domain-specific knowledge. After prompt tuning, the model can answer questions related to that knowledge. This approach is the most computationally efficient of the three, and it does not require a vast amount of data to be trained on a new content domain.

Morgan Stanley, for example, used prompt tuning to train OpenAI’s GPT-4 model using a carefully curated set of 100,000 documents with important investing, general business, and investment process knowledge. The goal was to provide the company’s financial advisors with accurate and easily accessible knowledge on key issues they encounter in their roles advising clients. The prompt-trained system is operated in a private cloud that is only accessible to Morgan Stanley employees.

While this is perhaps the easiest of the three approaches for an organization to adopt, it is not without technical challenges. Unstructured data like text is usually too voluminous, with too many important attributes, to be entered directly into the LLM’s context window. The alternative is to create vector embeddings — arrays of numeric values produced from the text by another pre-trained machine learning model (Morgan Stanley uses one from OpenAI called Ada). The vector embeddings are a more compact representation of the data that preserves contextual relationships in the text. When a user enters a prompt into the system, a similarity algorithm determines which vectors should be submitted to the GPT-4 model. Although several vendors offer tools to make this process of prompt tuning easier, it is still complex enough that most companies adopting the approach would need substantial data science talent.
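To illustrate the pattern, here is a minimal sketch of the embed-and-retrieve step described above; it uses the open-source sentence-transformers library rather than OpenAI’s Ada model, and the documents and query are invented.

```python
# Sketch of prompt tuning via vector embeddings and similarity search.
# Uses sentence-transformers instead of OpenAI's Ada; all content is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Policy 12: municipal bonds are generally exempt from federal income tax.",
    "Research note: small-cap value funds outperformed growth funds last quarter.",
    "Procedure: how to open a managed retirement account for a new client.",
]
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # dot product of unit vectors = cosine similarity
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

question = "Are muni bond coupons taxable?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be submitted to the LLM (e.g., GPT-4) in its context window.
```

In a production system, the document vectors would live in a dedicated vector database rather than in memory, but the retrieval logic is essentially the same.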

However, this approach does not need to be very time-consuming or expensive if the needed content is already present. The investment research company Morningstar, for example, used prompt tuning and vector embeddings for its Mo research tool built on generative AI. It incorporates more than 10,000 pieces of Morningstar research. After only a month or so of work on its system, Morningstar opened Mo usage to its financial advisors and independent investor customers. It even attached Mo to a digital avatar that could speak its answers aloud. This technical approach is not expensive; in its first month in use, Mo answered 25,000 questions at an average cost of $.002 per question for a total cost of $3,000.

Content Curation and Governance

As with traditional knowledge management, in which documents were loaded into discussion databases like Microsoft SharePoint, content needs to be high-quality before customizing LLMs in any fashion. In some cases, as with the Google Med-PaLM2 system, there are widely available databases of medical knowledge that have already been curated. Otherwise, a company needs to rely on human curation to ensure that knowledge content is accurate, timely, and not duplicated. Morgan Stanley, for example, has a group of 20 or so knowledge managers in the Philippines who constantly score documents along multiple criteria; those scores determine each document’s suitability for incorporation into the GPT-4 system. Most companies that do not already have well-curated content will find it challenging to curate it just for this purpose.

Morgan Stanley has also found that it is much easier to maintain high-quality knowledge if content authors are aware of how to create effective documents. They are required to take two courses, one on the document management tool and a second on how to write and tag these documents. This is a component of the company’s approach to content governance — a systematic method for capturing and managing important digital content.

At Morningstar, content creators are being taught what type of content works well with the Mo system and what does not. They submit their content into a content management system and it goes directly into the vector database that supplies the OpenAI model.

Quality Assurance and Evaluation

An important aspect of managing generative AI content is ensuring quality. Generative AI is widely known to “hallucinate” on occasion, confidently stating facts that are incorrect or nonexistent. Errors of this type can be problematic for businesses but could be deadly in healthcare applications. The good news is that companies that have tuned their LLMs on domain-specific information have found hallucinations to be less of a problem than with out-of-the-box LLMs, at least when there are no extended dialogues or non-business prompts.

Companies adopting these approaches to generative AI knowledge management should develop an evaluation strategy. For example, for BloombergGPT, which is intended for answering financial and investing questions, the system was evaluated on public dataset financial tasks, named entity recognition, sentiment analysis ability, and a set of reasoning and general natural language processing tasks. The Google Med-PaLM2 system, eventually oriented to answering patient and physician medical questions, had a much more extensive evaluation strategy, reflecting the criticality of accuracy and safety in the medical domain.

Life or death isn’t an issue at Morgan Stanley, but producing highly accurate responses to financial and investing questions is important to the firm, its clients, and its regulators. The answers provided by the system were carefully evaluated by human reviewers before it was released to any users. Then it was piloted for several months by 300 financial advisors. As its primary approach to ongoing evaluation, Morgan Stanley has a set of 400 “golden questions” to which the correct answers are known. Every time any change is made to the system, employees test it with the golden questions to see if there has been any “regression,” or less accurate answers.
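A sketch of how such a regression check might be automated appears below; the golden questions, the containment scoring rule, and the ask_system callable are illustrative stand-ins, and Morgan Stanley’s actual evaluation also relies on human reviewers.

```python
# Sketch of a "golden questions" regression check (illustrative stand-ins only).
GOLDEN_SET = [
    {"question": "What is the annual IRA contribution limit?", "expected": "6,500"},
    {"question": "Is margin interest tax deductible?", "expected": "investment income"},
    # ... several hundred more questions in practice
]

def run_regression(ask_system, threshold: float = 0.95) -> bool:
    """Re-ask every golden question and flag any regression in accuracy.

    ask_system is a callable that sends a question to the tuned LLM system
    and returns its answer as a string.
    """
    correct = 0
    for item in GOLDEN_SET:
        answer = ask_system(item["question"])
        if item["expected"].lower() in answer.lower():  # crude containment check
            correct += 1
    accuracy = correct / len(GOLDEN_SET)
    print(f"Golden-question accuracy: {accuracy:.1%}")
    return accuracy >= threshold

# Run after every change to the documents, prompts, or model version; a failing
# result signals that answers have become less accurate and the change needs review.
```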

Legal and Governance Issues

Legal and governance issues associated with LLM deployments are complex and evolving, leading to risk factors involving intellectual property, data privacy and security, bias and ethics, and false/inaccurate output. Currently, the legal status of LLM outputs is still unclear. Since LLMs don’t produce exact replicas of any of the text used to train the model, many legal observers feel that “fair use” provisions of copyright law will apply to them, although this hasn’t been tested in the courts (and not all countries have such provisions in their copyright laws). In any case, it is a good idea for any company making extensive use of generative AI for managing knowledge (or most other purposes for that matter) to have legal representatives involved in the creation and governance process for tuned LLMs. At Morningstar, for example, the company’s attorneys helped create a series of “pre-prompts” that tell the generative AI system what types of questions it should answer and those it should politely avoid.
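As an illustration, pre-prompts can be implemented as a system message sent before any user input; the sketch below uses OpenAI’s chat API, and the guardrail wording is invented rather than Morningstar’s actual pre-prompts.

```python
# Sketch of a "pre-prompt" guardrail sent as a system message.
# The instruction wording is invented, not any company's actual pre-prompt.
from openai import OpenAI

client = OpenAI()

PRE_PROMPT = (
    "You are a research assistant for financial advisors. "
    "Answer only questions about investing, markets, and the firm's research. "
    "If asked for personal legal or tax advice, or anything outside that scope, "
    "politely decline and suggest speaking with a qualified professional."
)

def answer(user_question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PRE_PROMPT},  # guardrail precedes user input
            {"role": "user", "content": user_question},
        ],
    )
    return response.choices[0].message.content
```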

User prompts into publicly available LLMs are used to train future versions of the system, so some companies (Samsung, for example) have feared the propagation of confidential and private information and banned LLM use by employees. However, most companies’ efforts to tune LLMs with domain-specific content are performed on private instances of the models that are not accessible to public users, so this should not be a problem. In addition, some generative AI systems such as ChatGPT allow users to turn off the collection of chat histories, which can address confidentiality issues even on public systems.

In order to address confidentiality and privacy concerns, some vendors are providing advanced safety and security features for LLMs, including erasing user prompts, restricting certain topics, and preventing source code and proprietary data from being entered into publicly accessible LLMs. Furthermore, vendors of enterprise software systems are incorporating a “Trust Layer” in their products and services. Salesforce, for example, incorporated its Einstein GPT feature into its AI Cloud suite to address the “AI Trust Gap” between companies that want to quickly deploy LLM capabilities and the aforementioned risks that these systems pose in business environments.
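The sketch below illustrates the kind of input filtering such a trust layer might perform before a prompt ever leaves the company; the blocked topics and redaction patterns are simple illustrations, not any vendor’s actual implementation.

```python
# Sketch of a pre-submission filter that blocks or redacts sensitive prompt content
# before it reaches an external LLM (patterns are illustrative, not a vendor product).
import re

BLOCKED_TOPICS = ("source code", "customer ssn", "merger")  # illustrative topics
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub_prompt(prompt: str) -> str:
    """Redact obvious PII and refuse prompts that touch blocked topics."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        raise ValueError("Prompt touches a restricted topic and was not sent.")
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

safe_prompt = scrub_prompt("Summarize the complaint from jane.doe@example.com about fees.")
# safe_prompt: "Summarize the complaint from [REDACTED EMAIL] about fees."
```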

Shaping User Behavior

Ease of use, broad public availability, and useful answers that span various knowledge domains have led to rapid, somewhat unguided, and organic adoption of generative AI-based knowledge management by employees. For example, a recent survey indicated that more than a third of employees used generative AI in their jobs, yet 68% of those respondents didn’t inform their supervisors that they were using the tool. To realize the opportunities and manage the potential risks of applying generative AI to knowledge management, companies need to develop the culture of transparency and accountability that makes such systems successful.

In addition to implementation of policies and guidelines, users need to understand how to safely and effectively incorporate generative AI capabilities into their tasks to enhance performance and productivity. Generative AI capabilities, including awareness of context and history, generating new content by aggregating or combining knowledge from various sources, and data-driven predictions, can provide powerful support for knowledge work. Generative AI-based knowledge management systems can automate information-intensive search processes (legal case research, for example) as well as high-volume and low-complexity cognitive tasks such as answering routine customer emails. This approach increases efficiency of employees, freeing them to put more effort into the complex decision-making and problem-solving aspects of their jobs.

Some specific behaviors that might be desirable to inculcate — either through training or policies — include:

  • What types of content are available through the system;
  • How to create effective prompts;
  • What types of prompts and dialogues are allowed, and which ones are not;
  • How to request that additional knowledge content be added to the system;
  • How to use the system’s responses in dealing with customers and partners;
  • How to create new content in a useful and effective manner.

Both Morgan Stanley and Morningstar trained content creators in particular on how best to create and tag content, and what types of content are well-suited to generative AI usage.

“Everything Is Moving Very Fast”

One of the executives we interviewed said, “I can tell you what things are like today. But everything is moving very fast in this area.” New LLMs and new approaches to tuning their content are announced daily, as are new products from vendors with specific content or task foci. Any company that commits to embedding its own knowledge into a generative AI system should be prepared to revise its approach to the issue frequently over the next several years.

While there are many challenging issues involved in building and using generative AI systems trained on a company’s own knowledge content, we’re confident that the overall benefit to the company is worth the effort to address these challenges. The long-term vision of enabling any employee — and customers as well — to easily access important knowledge within and outside of a company to enhance productivity and innovation is a powerful draw. Generative AI appears to be the technology that is finally making it possible.

3 Steps to Prepare Your Culture for AI https://smallbiz.com/3-steps-to-prepare-your-culture-for-ai/

The platform shift to AI is well underway. And while it holds the promise of transforming work and giving organizations a competitive advantage, realizing those benefits isn’t possible without a culture that embraces curiosity, failure, and learning. Leaders are uniquely positioned to foster this culture within their organizations today in order to set their teams up for success in the future. When paired with the capabilities of AI, this kind of culture will unlock a better future of work for everyone.

As business leaders, today we find ourselves in a place that’s all too familiar: the unfamiliar. Just as we steered our teams through the shift to remote and flexible work, we’re now on the verge of another seismic shift: AI. And like the shift to flexible work, priming an organization to embrace AI will hinge first and foremost on culture.

The pace and volume of work has increased exponentially, and we’re all struggling under the weight of it. Leaders and employees are eager for AI to lift the burden. That’s the key takeaway from our 2023 Work Trend Index, which surveyed 31,000 people across 31 countries and analyzed trillions of aggregated productivity signals in Microsoft 365, along with labor market trends on LinkedIn.

Nearly two-thirds of employees surveyed told us they don’t have enough time or energy to do their job. The cause of this drain is something we identified in the report as digital debt: the influx of data, emails, and chats has outpaced our ability to keep up. Employees today spend nearly 60% of their time communicating, leaving only 40% of their time for creating and innovating. In a world where creativity is the new productivity, digital debt isn’t just an inconvenience — it’s a liability.

AI promises to address that liability by allowing employees to focus on the most meaningful work. Increasing productivity, streamlining repetitive tasks, and increasing employee well-being are the top three things leaders want from AI, according to our research. Notably, amid fears that AI will replace jobs, reducing headcount was last on the list.

Becoming an AI-powered organization will require us to work in entirely new ways. As leaders, there are three steps we can take today to get our cultures ready for an AI-powered future:

Choose curiosity over fear

AI marks a new interaction model between humans and computers. Until now, the way we’ve interacted with computers has been similar to how we interact with a calculator: We ask a question or give directions, and the computer provides an answer. But with AI, the computer will be more like a copilot. We’ll need to develop a new kind of chemistry together, learning when and how to ask questions and about the importance of fact-checking responses.

Fear is a natural reaction to change, so it’s understandable for employees to feel some uncertainty about what AI will mean for their work. Our research found that while 49% of employees are concerned AI will replace their jobs, the promise of AI outweighs the threat: 70% of employees are more than willing to delegate work to AI to lighten their workloads.

We’re rarely served by operating from a place of fear. By fostering a culture of curiosity, we can empower our people to understand how AI works, including its capabilities and its shortcomings. This understanding starts with firsthand experience. Encourage employees to put curiosity into action by experimenting (safely and securely) with new AI tools, such as AI-powered search, intelligent writing assistance, or smart calendaring, to name just a few. Since every role and function will have different ways to use and benefit from AI, challenge them to rethink how AI could improve or transform processes as they get familiar with the tools. From there, employees can begin to unlock new ways of working.

Embrace failure

AI will change nearly every job, and nearly every work pattern can benefit from some degree of AI augmentation or automation. As leaders, now is the time to encourage our teams to bring creativity to reimagining work, adopting a test-and-learn strategy to find ways AI can best help meet the needs of the business.

AI won’t get it right every time, but even when it’s wrong, it’s usefully wrong. It moves you at least one step forward from a blank slate, so you can jump right into the critical thinking work of reviewing, editing, or augmenting. It will take time to learn these new patterns of work and identify which processes need to change and how. But if we create a culture where experimentation and learning are viewed as a prerequisite to progress, we’ll get there much faster.

As leaders, we have a responsibility to create the right environment for failure so that our people are empowered to experiment to uncover how AI can fit into their workflows. In my experience, that includes celebrating wins as well as sharing lessons learned in order to help keep each other from wasting time learning the same lesson twice. Both formally and informally, carve out space for people to share knowledge — for example, by crowdsourcing a prompt guidebook within your department or making AI tips a standing agenda item in your monthly all-staff meetings. Operating with agility will be a foundational tenet of AI-powered organizations.

Become a learn-it-all

I often hear concerns that AI will be a crutch, offering shortcuts and workarounds that ultimately diminish innovation and engagement. In my mind, the potential for AI is so much bigger than that, and it will become a competitive advantage for those who use it thoughtfully; they will become your most engaged and innovative employees.

The value you get from AI is only as good as what you put in. Simple questions will result in simple answers. But sophisticated, thought-provoking questions will result in more complex analysis and bigger ideas. The value will shift from employees who have all the right answers to employees who know how to ask the right questions. Organizations of the future will place a premium on analytical thinkers and problem-solvers who can effectively reason over AI-generated content.

At Microsoft, we believe a learn-it-all mentality will get us much farther than a know-it-all one. And while the learning curve of using AI can be daunting, it’s a muscle that has to be built over time — and that we should start strengthening today. When I talk to leaders about how to achieve this across their companies and teams, I tell them three things:

  • Establish guardrails to help people experiment safely and responsibly. Which tools do you encourage employees to use, and what data is — and isn’t — appropriate to input? What guidelines do they need to follow around fact-checking, reviewing, and editing?
  • Learning to work with AI will need to be a continuous process, not a one-time training. Infuse learning opportunities into your rhythm of business and keep employees up to date with the latest resources. For example, one team might block off Friday afternoons for learning, while another has monthly “office hours” for AI Q&A and troubleshooting. And think beyond traditional courses or resources. How can peer-to-peer knowledge sharing, such as lunch and learns or a digital hotline, play a role so people can learn from each other?
  • Embrace the need for change management. Being intentional and programmatic will be crucial for successfully adopting AI. Identify goals and metrics for success, and select AI champions or pilot program leads to help bring the vision to life. Different functions and disciplines will have different needs and challenges when it comes to AI, but one shared need will be for structure and support as we all transition to a new way of working.

The platform shift to AI is well underway. And while it holds the promise of transforming work and giving organizations a competitive advantage, realizing those benefits isn’t possible without a culture that embraces curiosity, failure, and learning. As leaders, we’re uniquely positioned to foster this culture within our organizations today in order to set our teams up for success in the future. When paired with the capabilities of AI, this kind of culture will unlock a better future of work for everyone.

What Roles Could Generative AI Play on Your Team? https://smallbiz.com/what-roles-could-generative-ai-play-on-your-team/

The frenzy surrounding the launch of Large Language Models (LLMs) and other types of Generative AI (GenAI) isn’t going to fade anytime soon. Users of GenAI are discovering and recommending new and interesting use cases for their business and personal lives. Many recommendations start with the assumption that GenAI requires a human prompt. Indeed, Time magazine recently proclaimed “prompt engineering” to be the next hot job, with salaries reaching $335,000. Tech forums and educational websites are focusing on prompt engineering, with Udemy already offering a course on the topic, and several organizations we work with are now beginning to invest considerable resources in training employees on how best to use ChatGPT.

However, it may be worth pausing to consider other ways of interacting with GPT technologies, which are likely to emerge soon. We present an intuitive way to think about this issue, based on our own survey of GenAI developments and on conversations with companies that are seeking to develop some of these interaction models.

A Framework of GPT Interactions

A good starting point is to distinguish between who is involved in the interaction — individuals, groups of people, or another machine — and who starts the interaction, human or machine. This leads to six different types of GenAI uses: ChatGPT and CoachGPT for individuals, GroupGPT and BossGPT for groups, and AutoGPT and ImperialGPT for machine-to-machine interaction, with the first of each pair initiated by a human and the second by the machine. ChatGPT, where one human initiates interaction with the machine, is already well-known. We now describe each of the other GPTs and outline their potential.

CoachGPT is a personal assistant that provides you with a set of suggestions on managing your daily life. It would base these suggestions not on explicit prompts from you, but on observations of what you do and of your environment. For example, it could observe you as an executive and note that you find it hard to build trust in your team; it could then recommend precise actions to overcome this blind spot. It could also come up with personalized advice on development options or even salary negotiations.

CoachGPT would subsequently see which recommendations you adopted or ignored, and which benefited you and which didn’t, and use that feedback to improve its advice. Over time, you would get a highly personalized AI advisor, coach, or consultant.

Organizations could adopt CoachGPT to advise customers on how to use a product, whether a construction company offering CoachGPT to advise end users on how best to use its equipment, or an accounting firm proffering real-time advice on how best to account for a set of transactions.

To make CoachGPT effective, individuals and organizations would have to allow it to work in the background, monitoring online and offline activities. Clearly, serious privacy considerations need to be addressed before we entrust our innermost thoughts to the system. However, the potential for positive outcomes in both private and professional lives is immense.

GroupGPT would be a bona fide group member that can observe interactions between group members and contribute to the discussion. For example, it could conduct fact checking, supply a summary of the conversation, suggest what to discuss next, play the role of devil’s advocate, provide a competitor perspective, stress-test the ideas, or even propose a creative solution to the problem at hand.

The requests could come from individual group members or from the team’s boss, who need not participate in team interactions, but merely seeks to manage, motivate, and evaluate group members. The contribution could be delivered to the whole group or to specific individuals, with adjustments for that person’s role, skill, or personality.

The privacy concerns mentioned above also apply to GroupGPT, but, if addressed, organizations could take advantage of GroupGPT by using it for project management, especially on long and complicated projects involving relatively large teams across different departments or regions. Since GroupGPT would overcome human limitations on information storage and processing capacity, it would be ideal for supporting complex and dispersed teams.

BossGPT takes an active role in advising a group of people on what they could or should do, without being prompted. It could provide individual recommendations to group members, but its real value emerges when it begins to coordinate the work of group members, telling them as a group who should do what to maximize team output. BossGPT could also step in to offer individual coaching and further recommendations as the project and team dynamics evolve.

The algorithms necessary for BossGPT to work would be much more complicated as they would have to consider somewhat unpredictable individual and group reactions to instructions from a machine, but it could have a wide range of uses. For example: an executive changing job could request a copy of her reactions to her first organization’s BossGPT instructions, which could then be used to assess how she would fit into the new organization — and the new organization-specific BossGPT.

At the organizational level, companies could deploy BossGPT to manage people, thereby augmenting — or potentially even replacing — existing managers. Similarly, BossGPT has tremendous applications in coordinating work across organizations and managing complex supply chains or multiple suppliers.

Companies could turn BossGPT into a product, offering their customers AI solutions to help them manage their business. These solutions could be natural extensions of the CoachGPT examples described earlier. For example, a company selling construction equipment could offer BossGPT to coordinate many end users on a construction site, and an accounting firm could provide it to coordinate the work of many employees of its customers to run the accounting function in the most efficient way.

AutoGPT entails a human giving a request or prompt to one machine, which in turn engages other machines to complete the task. In its simplest form, a human might instruct a machine to complete a task, and the machine, realizing that it lacks the specific software needed to execute it, would search for the missing software on Google, download and install it, and then use it to finish the request.

In a more complicated version, humans could give AutoGPT a goal (such as creating the best viral YouTube video) and instruct it to interact with another GenAI to iteratively come up with the best ChatGPT prompt to achieve the goal. The machine would then launch the process by proposing a prompt to another machine, then evaluate the outcome, and adjust the prompt to get closer and closer to the final goal.
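A rough sketch of that propose-evaluate-adjust loop is shown below; llm(), score_outcome(), and revise_prompt() are hypothetical stand-ins, since this version of AutoGPT does not yet exist as a finished product.

```python
# Sketch of the iterative propose-evaluate-adjust loop described above.
# llm(), score_outcome(), and revise_prompt() are hypothetical stand-ins.
def optimize_prompt(goal: str, llm, score_outcome, revise_prompt,
                    max_rounds: int = 10, target: float = 0.9) -> str:
    """Iteratively refine a prompt until the generated outcome meets the goal."""
    prompt = f"Produce a plan to achieve this goal: {goal}"
    best_prompt, best_score = prompt, 0.0
    for _ in range(max_rounds):
        outcome = llm(prompt)                 # one machine generates a candidate
        score = score_outcome(outcome, goal)  # another model judges it against the goal
        if score > best_score:
            best_prompt, best_score = prompt, score
        if best_score >= target:
            break
        prompt = revise_prompt(prompt, outcome, score)  # adjust and try again
    return best_prompt
```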

In the most complicated version, AutoGPT could draw on functionalities of the other GPTs described above. For example, a team leader could task a machine with maximizing both the effectiveness and job satisfaction of her team members. AutoGPT could then switch between coaching individuals through CoachGPT, providing them with suggestions for smoother team interactions through GroupGPT, while at the same time issuing specific instructions on what needs to be done through BossGPT. AutoGPT could subsequently collect feedback from each activity and adjust all the other activities to reach the given goal.

Unlike the versions above, which are still to be created, a version of AutoGPT has already been developed and rolled out, in April 2023, and it’s quickly gaining broad acceptance. The technology is still not perfect and requires improvements, but it is already evident that AutoGPT can complete jobs that require several tasks to be carried out one after another.

We see its biggest applications in complex tasks, such as supply chain coordination, but also in fields such as cybersecurity. For example, organizations could prompt AutoGPT to continually address any cybersecurity vulnerabilities, which would entail looking for them — which already happens — but then instead of simply flagging them, AutoGPT would search for solutions to the threats or write its own patches to counter them. A human might still be in the loop, but since the system is self-generative within these limits, we believe that AutoGPT’s response is likely to be faster and more efficient.

ImperialGPT is the most abstract GenAI — and perhaps the most transformational — in which two or more machines would interact with each other, direct each other, and ultimately direct humans to engage in a course of action. This type of GPT worries most AI analysts, who fear losing control of AI and AI “going rogue.” We concur with these concerns, particularly if — as now — there are no strict guardrails on what AI is allowed to do.

At the same time, if ImperialGPT is allowed to come up with ideas and share them with humans, but its ability to act on those ideas is restricted, we believe it could generate extremely interesting creative solutions, especially for “unknown unknowns,” where human knowledge and creativity fall short. Such systems could then easily envision and game out multiple black swan events and worst-case scenarios, complete with potential costs and outcomes, to provide possible solutions.

Given the potential dangers of ImperialGPT, and the need for tight regulation, we believe that ImperialGPT will be slow to take off, at least commercially. We do anticipate, however, that governments, intelligence services, and the military will be interested in deploying ImperialGPT under strictly controlled conditions.

Implications for Your Business

So, what does our framework mean for companies and organizations around the world? First and foremost, we encourage you to step back and see the recent advances in ChatGPT as merely the first application of new AI technologies. Second, we urge you to think about the various applications outlined here and use our framework to develop applications for your own company or organization. In the process, we are sure you will discover new types of GPTs that we have not mentioned. Third, we suggest you classify these different GPTs in terms of potential value to your business, and the cost of developing them.

We believe that applications that begin with a single human initiating or participating in the interaction (GroupGPT, CoachGPT) will probably be the easiest to build and should generate substantial business value, making them the perfect initial candidates. In contrast, applications with interactions involving multiple entities or those initiated by machines (AutoGPT, BossGPT, and ImperialGPT) may be harder to implement, with trickier ethical and legal implications.

You might also want to start thinking about the complex ethical, legal, and regulatory concerns that will arise with each GPT type. Failure to do so exposes you and your company to both legal liabilities and — perhaps more importantly — an unintended negative effect on humanity.

Our next set of recommendations depends on your company type. A tech company or startup, or one that has ample resources to invest in these technologies, should start working on developing one or more of the GPTs discussed above. This is clearly a high-risk, high-reward strategy.

In contrast, if your competitive strength is not in GenAI or if you lack resources, you might be better off adopting a “wait and see” approach. This means you will be slow to adopt the current technology, but you will not waste valuable resources on what may turn out to be only an interim version of a product. Instead, you can begin preparing your internal systems to better capture and store data as well as readying your organization to embrace these new GPTs, in terms of both work processes and culture.

The launch and rapid adoption of GenAIs is rightly being considered as the next level in the evolution of AI and a potentially epochal moment for humanity in general. Although GenAIs represent breakthroughs in solving fundamental engineering and computer science problems, they do not automatically guarantee value creation for all organizations. Rather, smart companies will need to invest in modifying and adapting the core technology before figuring out the best way to monetize the innovations. Firms that do this right may indeed strike it rich in the GenAI goldrush.

Should You Start a Generative AI Company? https://smallbiz.com/should-you-start-a-generative-ai-company/

I am thinking of starting a company that employs generative AI but I am not sure whether to do it. It seems so easy to get off the ground. But if it is so easy for me, won’t it be easy for others too? 

This year, more entrepreneurs have asked me this question than any other. Part of what is so exciting about generative AI is that the upsides seem limitless. For instance, if you have managed to create an AI model that has some kind of general language reasoning ability, you have a piece of intelligence that can potentially be adapted toward various new products that could also leverage this ability — like screen writing, marketing materials, teaching software, customer service, and more.

For example, the software company Luka built an AI companion called Replika that enables customers to have open-ended conversations with an “AI friend.” Because the technology was so powerful, managers at Luka began receiving inbound requests to provide a white label enterprise solution for businesses wishing to improve their chatbot customer service. In the end, Luka’s managers used the same underlying technology to spin off both an enterprise solution and a direct-to-consumer AI dating app (think Tinder, but for “dating” AI characters).

In deciding whether a generative AI company is for you, I recommend establishing answers to the following two big questions: 1) Will your company compete on foundational models, or on top-layer applications that leverage these foundational models? And 2) Where along the continuum between a highly scripted solution and a highly generative solution will your company be located? Depending on your answers to these two questions, there will be long-lasting implications for your ability to defend yourself against the competition.

Foundational Models or Apps?

Tech giants are now renting out their most generalizable proprietary models — i.e., “foundational models” — and companies like EleutherAI and Stability AI are providing open-source versions of these foundational models at a fraction of the cost. Foundational models are becoming commoditized, and only a few startups can afford to compete in this space.

You may think that foundational models are the most attractive, because they will be widely used and their many applications will provide lucrative opportunities for growth. What is more, we are living in exciting times where some of the most sophisticated AI is already available “off the shelf” to get started with.

Entrepreneurs who want to base their company on foundational models are in for a challenge, though. As in any commoditized market, the companies that will survive are those that offer unbundled offerings for cheap or that deliver increasingly enhanced capabilities. For example, speech-to-text APIs like Deepgram and Assembly AI compete not only with each other but with the likes of Amazon and Google in part by offering cheaper, unbundled solutions. Even so, these firms are in a fierce war on price, speed, model accuracy, and other features. In contrast, tech giants like Amazon, Meta, and Google make significant R&D investments that enable them to relentlessly deliver cutting-edge advances in image, language, and (increasingly) audio and video reasoning. For instance, it is estimated that OpenAI spent anywhere between $2 and $12 million to computationally train ChatGPT — and this is just one of several APIs that they offer, with more on the way.

Instead of competing on increasingly commoditized foundational models, most startups should differentiate themselves by offering “top layer” software applications that leverage other companies’ foundational models. They can do this by fine-tuning foundational models on their own high quality, proprietary datasets that are unique to their customer solution, to provide high value to customers.

For instance, the marketing content creator, Jasper AI, grew to unicorn status largely by leveraging foundational models from OpenAI. To this day, the firm uses OpenAI to help customers generate content for blogs, social media posts, website copy and more. At the same time, the app is tailored for their marketer and copywriter customers, providing specialized marketing content. The company also provides other specialized tools, like an editor that multiple team members can work on in tandem. Now that the company has gained traction, going forward it can afford to spend more of its resources on reducing its dependency on the foundational models that enabled it to grow in the first place.

Since the top-layer apps are where these companies find their competitive advantage, they lie in a delicate balance between protecting the privacy of their datasets from large tech players even as they rely on these players for foundational models. Given this, some startups may be tempted to build their own in-house foundational models. Yet, this is unlikely to be a good use of precious startup funds, given the challenges noted above. Most startups are better off leveraging foundational models to grow fast, instead of reinventing the wheel.

From Scripted to Generative

Your company will need to live somewhere along a continuum from a purely scripted solution to a purely generative one. Scripted solutions involve selecting an appropriate response from a dataset of predefined, scripted responses, whereas generative ones involve generating new, unique responses from scratch.
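The toy sketch below contrasts the two ends of that continuum; the scripted answers and the generate_reply() callable are illustrative stand-ins, not a recommended implementation.

```python
# Toy contrast between a scripted and a generative response strategy.
# SCRIPTS and generate_reply() are illustrative stand-ins.
SCRIPTS = {
    "reset password": "Go to Settings > Security > Reset Password.",
    "refund policy": "Refunds are available within 30 days of purchase.",
}

def scripted_reply(user_message: str) -> str:
    """Scripted: pick from predefined answers; safe and constrained, but rigid."""
    for trigger, reply in SCRIPTS.items():
        if trigger in user_message.lower():
            return reply
    return "Sorry, I can only help with passwords and refunds."

def generative_reply(user_message: str, generate_reply) -> str:
    """Generative: compose a new answer from scratch; flexible, but needs guardrails.

    generate_reply is a callable wrapping whatever LLM the product uses.
    """
    return generate_reply(f"Reply helpfully and safely to: {user_message}")
```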

Scripted solutions are safer and constrained, but also less creative and human-like, whereas generative solutions are riskier and unconstrained, but also more creative and human-like. More scripted approaches are necessary for certain use-cases and industries, like medical and educational applications, where there need to be clear guardrails on what the app can do. Yet, when the script reaches its limit, users may lose their engagement and customer retention may suffer. Moreover, it is more challenging to grow a scripted solution because you constrain yourself right from the start, limiting your options down the road.

On the other hand, more generative solutions carry their own challenges. Because AI-based offerings include intelligence, there are more degrees of freedom in how consumers can interact with them, increasing the risks. For example, one married father tragically committed suicide following a conversation with an AI chatbot app, Chai, that encouraged him to sacrifice himself to save the planet. The app leveraged a foundational language model (a bespoke version of GPT-J) from EleutherAI. The founders of Chai have since modified the app so that mentions of suicidal ideation are served with helpful text. Interestingly, one of the founders of Chai, Thomas Rianlan, took the blame, saying: “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts.”

It is challenging for managers to anticipate all the ways in which things can go wrong with a highly generative app, given the “black box” nature of the underlying AI. Doing so involves anticipating risky scenarios that may be extremely rare. One way of anticipating such cases is to pay human annotators to screen content for potentially harmful categories, such as sex, hate speech, violence, self-harm, and harassment, and then use these labels to train models that automatically flag such content. Yet it is still difficult to come up with an exhaustive taxonomy. Thus, managers who deploy highly generative solutions must be prepared to proactively anticipate the risks, which can be both difficult and expensive. The same goes if you later decide to offer your solution as a service to other companies.
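As a simplified illustration of that labeling-to-flagging workflow, the sketch below trains a small text classifier on human-annotated examples; the data and categories are invented, and production moderation systems are far more elaborate.

```python
# Sketch: train a simple content flagger on human-annotated examples.
# The examples and categories are invented; real systems use far larger label sets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Let's plan the picnic for Saturday.",
    "I want to hurt myself tonight.",
    "You people are worthless and should disappear.",
    "Here is the quarterly sales summary you asked for.",
]
labels = ["safe", "self-harm", "harassment", "safe"]  # assigned by human annotators

flagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                        LogisticRegression(max_iter=1000))
flagger.fit(texts, labels)

def review(generated_text: str) -> str:
    """Return the predicted category; flagged text would be routed to a human."""
    return flagger.predict([generated_text])[0]

# With a realistically sized training set, a prompt or response like the one
# below should be flagged as "self-harm" and handled with supportive content.
print(review("Nobody would miss me if I were gone."))
```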

Because a fully generative solution is closer to natural, human-like intelligence, it is more attractive from the standpoint of retention and growth, because it is more engaging and can be applied to more new use cases.

• • •

Many entrepreneurs are considering starting companies that leverage the latest generative AI technology, but they must ask themselves whether they have what it takes to compete on increasingly commoditized foundational models, or whether they should instead differentiate on an app that leverages these models.

They must also consider what type of app they want to offer on the continuum from a highly scripted to a highly generative solution, given the different pros and cons accompanying each. Offering a more scripted solution may be safer but limit their retention and growth options, whereas offering a more generative solution is fraught with risk but is more engaging and flexible.

We hope that entrepreneurs will ask these questions before diving into their first generative AI venture, so that they can make informed decisions about what kind of company they want to be, scale fast, and maintain long-term defensibility.

When to Give Employees Access to Data and Analytics https://smallbiz.com/when-to-give-employees-access-to-data-and-analytics/

As business leaders strive to get the most out of their analytics investments, democratized data science often appears to offer the perfect solution. Using analytics software with no-code and low-code tools can put data science techniques into virtually anyone’s hands. In the best scenarios, this leads to better decision making and greater self-reliance and self-service in data analysis — particularly as demand for data scientists far outstrips their supply. Add to that reduced talent costs (with fewer high-cost data scientists) and more scalable customization to tailor analysis to a particular business need and context.

However, amid all the discussion around whether and how to democratize data science and analytics, a crucial point has been overlooked. The conversation needs to define when to democratize data and analytics, even to the point of redefining what democratization should mean.

Fully democratized data science and analytics presents many risks. As Reid Blackman and Tamara Sipes wrote in a recent article, data science is difficult and an untrained “expert” cannot necessarily solve hard problems, even with good software. The ease of clicking a button that produces results provides no assurance that the answer is good — in fact, it could be very flawed and only a trained data scientist would know.

It’s Only a Matter of Time

Even with these reservations, however, democratization of data science is here to stay, as evidenced by the proliferation of software and analytics tools. Thomas Redman and Thomas Davenport are among those who advocate for the development of “citizen data scientists,” even screening for basic data science skills and aptitudes in every position hired.

Democratization of data science, however, should not be taken to the extreme. Analytics need not be at everyone’s fingertips for an organization to flourish. How many outrageously talented people wouldn’t be hired simply because they lack “basic data science skills?” It’s unrealistic and overly limiting.

As business leaders look to democratize data and analysis within their organizations, the real question they should be asking is “when” it makes the most sense. This starts by acknowledging that not every “citizen” in an organization is comparably skilled to be a citizen data scientist. As Nick Elprin, CEO and co-founder of Domino Data Labs, which provides data science and machine learning tools to organizations, told me in a recent conversation, “As soon as you get into modeling, more complicated statistical issues are often lurking under the surface.”

The Challenge of Data Democratization

Consider a grocery chain that recently used advanced predictive methods to right-size its demand planning, in an attempt to avoid having too much inventory (resulting in spoilage) or too little (resulting in lost sales). The losses due to spoilage and stockouts were not enormous, but the problem of curtailing them was very hard to solve — given all the variables of demand, seasonality, and consumer behaviors. The complexity of the problem meant that the grocery chain could not leave it to citizen data scientists to figure it out, but rather leverage a team of bona fide, well-trained, data scientists.

Data citizenry requires a “representative democracy,” as Elprin and I discussed. Just as U.S. citizens elect politicians to represent them in Congress (presumably to act in their best interests in legislative matters), so too organizations need the right representation by data scientists and analysts to weigh in on issues that others simply don’t have the expertise to address.

In short, it’s knowing when and to what degree to democratize data. I suggest the following five criteria:

Think about the “citizen’s” skill level: The citizen data scientist, in some shape and form, is here to stay. As stated earlier, there simply aren’t enough data scientists to go around, and using this scarce talent to address every data issue isn’t sustainable. More to the point, democratization of data is key to inculcating analytical thinking across the organization. A well-recognized example is Coca-Cola, which has rolled out a digital academy to train managers and team leaders, producing graduates of the program who are credited with about 20 digital, automation, and analytics initiatives at several sites in the company’s manufacturing operations.

However, when it comes to engaging in predictive modeling and advanced data analysis that could fundamentally change a company’s operations, it’s crucial to consider the skill level of the “citizen.” A sophisticated tool in the hands of a data scientist is additive and valuable; the same tool in the hands of someone who is merely “playing around in data” can lead to errors, incorrect assumptions, questionable results, and misinterpretation of outcomes and conclusions.

Measure the importance of the problem: The more important a problem is to the company, the more imperative it is to have an expert handling the data analysis. For example, generating a simple graphic of historical purchasing trends can probably be accomplished by someone with a dashboard that displays data in a visually appealing form. But a strategic decision that has meaningful impact on a company’s operations requires expertise and reliable accuracy. For example, how much an insurance company should charge for a policy is so deeply foundational to the business model itself that it would be unwise to relegate this task to a non-expert.

Determine the problem’s complexity: Solving complex problems is beyond the capacity of the typical citizen data scientist. Consider the difference between comparing customer satisfaction scores across customer segments (simple, well-defined metrics and lower-risk) versus using deep learning to detect cancer in a patient (complex and high-risk). Such complexity cannot be left to a non-expert making cavalier decisions — and potentially the wrong decisions. When complexity and stakes are low, democratizing data makes sense.

An example is a Fortune 500 company I work with, which runs on data throughout its operations. A few years ago, I ran a training program in which more than 4,500 managers were divided into small teams, each of which was asked to articulate an important business problem that could be solved with analytics. Teams were empowered to solve simple problems with available software tools, but most problems surfaced precisely because they were difficult to solve. Importantly, these managers were not charged with actually solving those difficult problems, but rather with collaborating with the data science team. Notably, these roughly 1,000 teams identified no fewer than 1,000 business opportunities and 1,000 ways that analytics could help the organization.

Empower those with domain expertise: If a company is seeking some “directional” insights — customer X is more likely to buy a product than customer Y — then democratization of data and some lower-level citizen data science will probably suffice. In fact, tackling these types of lower-level analyses can be a great way to empower those with domain expertise (i.e., being closest to the customers) with some simplified data tools. Greater precision (such as with high-stakes and complex issues) requires expertise.

The most compelling case for precision is when there are high-stakes decisions to be made based on some threshold. If an aggressive cancer treatment plan with significant side effects were to be undertaken at, for instance, greater than 30% likelihood of cancer, it would be important to differentiate between 29.9% and 30.1%. Precision matters — especially in medicine, clinical operations, technical operations, and for financial institutions that navigate markets and risk, often to capture very small margins at scale.

Challenge experts to scout for bias: Advanced analytics and AI can easily lead to decisions that are considered “biased.”  This is challenging in part because the point of analytics is to discriminate — that is, to base choices and decisions on certain variables. (Send this offer to this older male, but not to this younger female because we think they will exhibit different purchasing behaviors in response.) The big question, therefore, is when such discrimination is actually acceptable and even good — and when it is inherently problematic, unfair, and dangerous to a company’s reputation.

Consider the example of Goldman Sachs, which was accused of discriminating by offering less credit on an Apple credit card to women than to men. In response, Goldman Sachs said it did not use gender in its model, only factors such as credit history and income. However, one could argue that credit history and income are correlated to gender and using those variables punishes women who tend to make less money on average and historically have had less opportunity to build credit. When using output that discriminates, decision-makers and data professionals alike need to understand how the data were generated and the interconnectedness of the data, as well as how to measure such things as differential treatment and much more. A company should never put its reputation on the line by having a citizen data scientist alone determine whether a model is biased.
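One basic differential-treatment check, of the kind a trained analyst would run before anyone acts on a model’s output, is sketched below; the data is invented, and real fairness audits use many complementary metrics and human judgment.

```python
# Sketch of a basic differential-treatment check (invented data; real fairness
# audits use many complementary metrics and domain review).
import pandas as pd

decisions = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "approved": [0, 1, 1, 1, 0, 1, 1, 1],
    "credit_limit": [3000, 9000, 5000, 8000, 2500, 7000, 6000, 9500],
})

by_group = decisions.groupby("gender").agg(
    approval_rate=("approved", "mean"),
    avg_limit=("credit_limit", "mean"),
)
print(by_group)

# Demographic-parity gap: a large difference warrants investigating the model and
# its proxy variables (e.g., income or credit history correlated with gender).
gap = by_group["approval_rate"].max() - by_group["approval_rate"].min()
print(f"Approval-rate gap between groups: {gap:.0%}")
```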

Democratizing data has its merits, but it comes with challenges. Giving the keys to everyone doesn’t make them an expert, and gathering the wrong insights can be catastrophic. New software tools can allow everyone to use data, but don’t mistake that widespread access for genuine expertise.

Infusing Digital Responsibility into Your Organization https://smallbiz.com/infusing-digital-responsibility-into-your-organization/ Fri, 28 Apr 2023 12:25:36 +0000 https://smallbiz.com/?p=103647

In 2018, Rick Smith, founder and CEO of Axon, the Scottsdale, Arizona-based manufacturer of Taser weapons and body cameras, became concerned that advances in technology were creating new and challenging ethical issues. So, he set up an independent AI ethics board made up of ethicists, AI experts, public policy specialists, and representatives of law enforcement to provide recommendations to Axon’s management. In 2019, the board recommended against adding facial recognition technology to the company’s line of body cameras, and in 2020, it provided guidelines regarding the use of automated license plate recognition technology. Axon’s management followed both recommendations.

In 2022, the board recommended against a management proposal to produce a drone-mounted Taser designed to address mass shootings. After initially accepting the board’s recommendation, the company changed its mind and, in June 2022, in the wake of the Uvalde school shootings, announced it was launching the Taser drone program anyway. The board’s response was dramatic: Nine of the 13 members resigned, and they released a letter that outlined their concerns. In response, the company announced a freeze on the project.

As societal expectations grow for the responsible use of digital technologies, firms that promote better practices will have a distinct advantage. According to a 2022 study, 58% of consumers, 60% of employees, and 64% of investors make key decisions based on their beliefs and values. Strengthening your organization’s digital responsibility can drive value creation, and brands regarded as more responsible will enjoy higher levels of stakeholder trust and loyalty. These businesses will sell more products and services, find it easier to recruit staff, and enjoy fruitful relationships with shareholders.

However, many organizations struggle to balance the legitimate but competing stakeholder interests. Key tensions arise between business objectives and responsible digital practices. For example, data localization requirements often conflict with the efficiency ambitions of globally distributed value chains. Ethical and responsible checks and balances that need to be introduced during AI/algorithm development tend to slow down development speed, which can be a problem when time-to-market is of utmost importance. Better data and analytics may enhance service personalization, but at the cost of customer privacy. Risks related to transparency and discrimination issues may dissuade organizations from using algorithms that could help drive cost reductions.

If managed effectively, digital responsibility can protect organizations from threats and open them up to new opportunities. Drawing from our ongoing research into digital transformations and in-depth studies of 12 large European firms across the consumer goods, financial services, information and communication technology, and pharmaceutical sectors that are active in digital responsibility, we derived four best practices to maximize business value and minimize resistance.

1. Anchor digital responsibility within your organizational values.

Digital responsibility commitments can be formulated into a charter that outlines key principles and benchmarks that your organization will adhere to. Start with a basic question: How do you define your digital responsibility objectives? The answer can often be found in your organization’s values, which are articulated in your mission statement or CSR commitments.

According to Jakob Woessner, manager of organizational development and digital transformation at cosmetics and personal care company Weleda, “our values framed what we wanted to do in the digital world, where we set our own limits, where we would go or not go.” The company’s core values are fair treatment, sustainability, integrity, and diversity. So when it came to establishing a robotics process automation program, Weleda executives were careful to ensure that it wasn’t associated with job losses, which would have violated the core value of fair treatment.

2. Extend digital responsibility beyond compliance.

While corporate values provide a useful anchor point for digital responsibility principles, relevant regulations on data privacy, IP rights, and AI cannot be overlooked. Forward-thinking organizations are taking steps to go beyond compliance and improve their behavior in areas such as cybersecurity, data protection, and privacy.

For example, UBS Banking Group’s efforts on data protection were kickstarted by GDPR compliance but have since evolved to focus more broadly on data-management practices, AI ethics, and climate-related financial disclosures. “It’s like puzzle blocks. We started with GDPR and then you just start building upon these blocks and the level moves up constantly,” said Christophe Tummers, head of service line data at the bank.

The key, we have found, is to establish a clear link between digital responsibility and value creation. One way this can be achieved is by complementing compliance efforts with a forward-looking risk-management mindset, especially in areas lacking technical implementation standards or where the law is not yet enforced. For example, Deutsche Telekom (DT) developed its own risk classification system for AI-related projects. The use of AI can expose organizations to risks associated with biased data, unsuitable modeling techniques, or inaccurate decision-making. Understanding the risks and building practices to reduce them are important steps in digital responsibility. DT includes these risks in scorecards used to evaluate technology projects.
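
Deutsche Telekom’s actual classification system is not public in detail, but the general idea can be sketched as a simple scorecard: rate each AI project on a handful of risk dimensions and map the total to a review tier. The dimensions, scale, and tiers below are illustrative assumptions, not DT’s scheme:

```python
# Illustrative AI-project risk scorecard; dimensions, weights, and tiers are assumptions.
RISK_DIMENSIONS = [
    "data_bias",        # how representative and well-governed is the training data?
    "model_opacity",    # can decisions be explained to the people they affect?
    "decision_impact",  # how consequential is an incorrect or unfair output?
    "privacy_exposure", # how sensitive is the personal data involved?
]

def classify_project(scores: dict) -> str:
    """Each dimension is scored 1 (low risk) to 5 (high risk)."""
    total = sum(scores[d] for d in RISK_DIMENSIONS)
    if total >= 16 or max(scores.values()) == 5:
        return "high risk: ethics review required before launch"
    if total >= 10:
        return "medium risk: documented mitigation plan and sign-off"
    return "low risk: standard project governance"

hr_screening_tool = {"data_bias": 4, "model_opacity": 4, "decision_impact": 5, "privacy_exposure": 4}
support_chatbot   = {"data_bias": 2, "model_opacity": 3, "decision_impact": 2, "privacy_exposure": 3}
print(classify_project(hr_screening_tool))  # -> high risk: ethics review required before launch
print(classify_project(support_chatbot))    # -> medium risk: documented mitigation plan and sign-off
```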

Making digital responsibility a shared outcome also helps organizations move beyond compliance. Swiss insurance company Die Mobiliar built an interdisciplinary team consisting of representatives from compliance, business security, data science, and IT architecture.  “We structured our efforts around a common vision where business strategy and personal data work together on proactive value creation,” explains Matthias Brändle, product owner of data science and AI.

3. Set up clear governance.

Getting digital responsibility governance right is not easy. Axon had the right idea when it set up an independent AI ethics board. However, the governance was not properly thought through, so when the company disagreed with the board’s recommendation, it fell into a governance grey area marked by competing interests between the board and management.

Setting up a clear governance structure can minimize such tensions. There is an ongoing debate about whether to create a distinct team for digital responsibility or to weave responsibility throughout the organization.

Pharmaceutical company Merck took the first approach, setting up a digital ethics board to provide guidance on complex matters related to data usage, algorithms, and new digital innovations. It decided to act due to an increasing focus on AI-based approaches in drug discovery and big data applications in human resources and cancer research. The board provides recommendations for action, and any decision going against the board’s recommendation needs to be formally justified and documented.

Global insurance company Swiss Re adopted the second approach, based on the belief that digital responsibility should be part of all of the organization’s activities. “Whenever there is a digital angle, the initiative owner who normally resides in the business is responsible. The business initiative owners are supported by experts in central teams, but the business lines are accountable for its implementation,” explained Lutz Wilhelmy, Swiss Re risk and regulation advisor.

Another option we’ve seen is a hybrid model, consisting of a small team of internal and external experts, who guide and support managers within the business lines to operationalize digital responsibility. The benefits of this approach include raised awareness and distributed accountability throughout the organization.

4. Ensure employees understand digital responsibility.

Today’s employees need not only to appreciate the opportunities and risks of working with different types of technology and data; they must also be able to raise the right questions and have constructive discussions with colleagues.

Educating the workforce on digital responsibility was one of the key priorities of the Otto Group, a German e-commerce enterprise. “Lifelong learning is becoming a success factor for each and every individual, but also for the future viability of the company,” explained Petra Scharner-Wolff, member of the executive board for finance, controlling, and human resources. To kickstart its efforts, Otto developed an organization-wide digital education initiative leveraging a central platform that included scores of videos on topics related to digital ethics, responsible data practices, and how to resolve conflicts.

Learning about digital responsibility presents both a short-term challenge of upskilling the workforce and a longer-term challenge of creating a self-directed learning culture that adapts to the evolving nature of technology. As issues related to digital responsibility rarely arise in a vacuum, we recommend embedding aspects of digital responsibility into ongoing ESG skilling programs that also focus on promoting ethical behavior toward a broader set of stakeholders. This type of contextual learning can help employees navigate the complex facets of digital responsibility in a more applied and meaningful way.

Your organization’s needs and resources will determine whether you choose to upskill your entire workforce or rely on a few specialists. A balance of both can be ideal, providing a strong foundation of digital ethics knowledge and understanding across the organization while also keeping experts on hand to provide specialized guidance when needed.

Digital responsibility is fast becoming an imperative for today’s organizations. Success is by no means guaranteed. Yet, by taking a proactive approach, forward-looking organizations can build and maintain responsible practices linked to their use of digital technologies. These practices not only improve digital performance but also advance broader organizational objectives.

Generative AI Will Change Your Business. Here’s How to Adapt. https://smallbiz.com/generative-ai-will-change-your-business-heres-how-to-adapt/ Wed, 12 Apr 2023 12:25:47 +0000 https://smallbiz.com/?p=99936

It’s coming. Generative AI will change the nature of how we interact with all software, and given how many brands have significant software components in how they interact with customers, generative AI will drive and distinguish how more brands compete.

In our last HBR piece, “Customer Experience in the Age of AI,” we discussed how the use of one’s customer information is already differentiating branded experiences. Now with generative AI, personalization will go even further, tailoring all aspects of digital interaction to how the customer wants it to flow, not how product designers envision cramming in more menus and features. And then as the software follows the customer, it will go to places that range beyond the tight boundaries of a brand’s product. It will need to offer solutions for the things the customer wants to get done: solve the full package of what someone needs, and help them through their full journey to get there, even if that means linking to outside partners, rethinking the definition of one’s offerings, and developing the underlying data and tech architecture to connect everything involved in the solution.

Generative AI can “generate” text, speech, images, music, video, and especially code. When that capability is joined with a feed of someone’s own information, used to tailor the when, what, and how of an interaction, then the ease by which someone can get things done, and the broadening accessibility of software, goes up dramatically. The simple input question box that stands at the center of Google, and now of most generative AI systems such as ChatGPT and DALL-E 2, will power more systems. Say goodbye to drop-down menus in software, and the inherently guided restrictions they place on how you use them. Instead, you’ll just see: “What do you want to do today?” And when you tell it what you want to do, it will likely offer some suggestions, drawing upon its knowledge of what you did last time, what the system knows about your current context, and what you’ve already stored in the system as your core goals, such as “save for a trip,” “remodel our kitchen,” “manage meal plans for my family of five with special dietary needs,” etc.
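
Under the hood, that kind of tailoring can be as plain as assembling the customer’s stored goals and current context into the instruction the model receives. A minimal sketch, in which the goal store, the context fields, and the prompt wording are all invented for illustration:

```python
# Hypothetical personalization layer: stored goals plus live context shape the prompt.
stored_goals = [
    "save for a trip",
    "manage meal plans for a family of five with special dietary needs",
]
context = {"time": "Saturday 5:30 pm", "location": "home", "last_action": "browsed grocery delivery"}

def build_prompt(user_request: str) -> str:
    return (
        "You are the assistant behind this brand's single input box.\n"
        f"The customer's standing goals: {', '.join(stored_goals)}.\n"
        f"Current context: {context}.\n"
        f"Customer request: {user_request}\n"
        "Respond with concrete next steps and suggest follow-ups tied to the standing goals."
    )

# The assembled prompt would then be sent to whichever large language model the brand uses.
print(build_prompt("plan dinner for tonight"))
```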

Without the boundaries of a conventional software interface, consumers will just want to get done what they need, not caring whether the brand behind the software has limitations. The change in how we interact, and what we expect, will be dramatic, and dramatically more democratizing.

So much of the hype on generative AI has focused on its ability to generate text, images, and sounds, but it also can create code to automate actions, and to facilitate pulling in external and internal data. By generating code in response to a command, it creates a shortcut that takes the user from a command to an action that simply gets done. No more working through all of the menus in the software. Even questions into and analyses of the data stored in an application will be easily done just by asking: “Who are the contacts I have not called in the last 90 days?” or “When is the next time I am scheduled to be in NYC with an opening for dinner?” To answer these questions now, we have to go into an application and gather data (possibly manually) from outside of the application itself. Now, the query can be recognized, code created, possibilities ranked, and the best answer generated. In milliseconds.
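
To see how much menu-clicking a generated query collapses, take the contacts question above. In the sketch below, the SQL that a language model might produce is mocked as a hard-coded string and then simply executed; the table, the columns, and the “generated” query are all made up for illustration rather than drawn from any particular product:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contacts (name TEXT, last_called DATE);
    INSERT INTO contacts VALUES
        ('Ana',  date('now', '-10 days')),
        ('Bob',  date('now', '-120 days')),
        ('Caro', date('now', '-200 days'));
""")

# In a real system a language model would translate the natural-language question
# "Who are the contacts I have not called in the last 90 days?" into SQL.
# Here the generated query is mocked so the sketch stays self-contained.
generated_sql = """
    SELECT name FROM contacts
    WHERE last_called < date('now', '-90 days')
    ORDER BY last_called;
"""

for (name,) in conn.execute(generated_sql):
    print(name)   # -> Caro, Bob
```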

This drastically simplifies how we interact with what we think of as today’s applications. It also enables more brands to build applications as part of their value proposition. “Given the weather, traffic, and who I am with, give me a tourist itinerary for the afternoon, with an ongoing guide, and the ability to just buy any tickets in advance to skip any lines.” “Here’s my budget, here’s five pictures of my current bathroom, here’s what I want from it, now give me a renovation design, a complete plan for doing it, and the ability to put it out for bid.” Who will create these capabilities? Powerful tech companies? Brands who already have relationships in their relevant categories? New, focused disruptors? The game is just starting, but the needed capabilities and business philosophies are already taking shape.

A Broader Journey with Broader Boundaries

In a world where generative AI and all of the other evolving AI systems proliferate, building your own offering requires focusing on the broadest possible view of your pool of data, of the journeys you can enable, and of the risks they raise:

Bring data together.

Solving for a customer’s complete need will require pulling from information across your company, and likely beyond your boundaries. One of the biggest challenges for most applications, and actually for most IT departments, is bringing data together from disparate systems. Many AI systems can write the code needed to understand the schemas of two different databases, and integrate them into one repository, which can save several steps in standardizing data schema. AI teams still need to dedicate time for data cleansing and data governance (arguably even more so), for example, aligning on the right definitions of key data features. However, with AI capabilities in hand, the next steps in the process to bring all the data together become easier.
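
The mechanics of that step can be as simple as mapping each source schema onto a shared one before loading, which is exactly the kind of mapping code an AI system can help draft. A minimal sketch with two invented customer tables:

```python
import pandas as pd

# Two source systems describe the same customers with different schemas (invented data).
crm = pd.DataFrame({
    "cust_id": [1, 2], "full_name": ["Ana Li", "Bob Roy"], "annual_rev": [120, 80],
})
billing = pd.DataFrame({
    "customer_number": [2, 3], "name": ["Bob Roy", "Caro Diaz"], "revenue_k": [85, 40],
})

# Map each source onto the shared repository's column names, then combine.
crm_mapped = crm.rename(columns={"cust_id": "customer_id", "full_name": "name", "annual_rev": "revenue_k"})
billing_mapped = billing.rename(columns={"customer_number": "customer_id"})

repository = (
    pd.concat([crm_mapped, billing_mapped], ignore_index=True)
      .drop_duplicates(subset="customer_id", keep="first")
)
print(repository)
# Governance still matters: the two systems disagree on Bob Roy's revenue (80 vs. 85),
# and a human-defined rule has to decide which source of truth wins.
```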

Narrative AI, for example, offers a marketplace for buying and selling data, along with data collaboration software that allows companies to import data from anywhere into their own repositories, aligned to their schema, with merely a click. Data from across a company, from partners, or from sellers of data, can be integrated and then used for modeling in a flash.

Combining one’s own proprietary data with public data, with data from other available AI tools, and with data from many external parties can serve to dramatically improve the AI’s ability to understand one’s context, predict what is being asked, and have a broader pool from which to execute a command.

The old rule around “garbage in, garbage out” still applies, however. Especially when it comes to integrating third-party data, it is important to cross-check the accuracy with internal data before integrating it into the underlying data set. For example, one fashion brand recently found that gender data purchased from a third-party source didn’t match its internal data 50% of the time, so the source and reliability really matters.
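
A lightweight version of that cross-check is to join the purchased attribute against internal records and measure the agreement rate before the field ever enters a model. The data and the acceptance threshold below are illustrative assumptions:

```python
import pandas as pd

internal  = pd.DataFrame({"customer_id": [1, 2, 3, 4], "gender": ["F", "M", "F", "M"]})
purchased = pd.DataFrame({"customer_id": [1, 2, 3, 4], "gender": ["F", "F", "M", "M"]})

merged = internal.merge(purchased, on="customer_id", suffixes=("_internal", "_thirdparty"))
match_rate = (merged["gender_internal"] == merged["gender_thirdparty"]).mean()
print(f"Third-party attribute matches internal records {match_rate:.0%} of the time")  # -> 50%

ACCEPTANCE_THRESHOLD = 0.80   # assumed bar; set it per attribute and per use case
if match_rate < ACCEPTANCE_THRESHOLD:
    print("Hold the integration: investigate the source before this field feeds any model.")
```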

The “rules layer” becomes even more critical.

Without obvious restrictions on what a customer can ask for in an input box, the AI needs to have guidelines that ensure it responds appropriately to things beyond its means or that are inappropriate. This amplifies the need for a sharp focus on the rules layer, where the experience designers, marketers and business decision makers set the target parameters for the AI to optimize.

For example, for an airline brand that leveraged AI to decide on the “next best conversation” to engage in with customers, we set rules around what products could be marketed to which customers, what copy could be used in which jurisdictions, and rules around anti-repetition to ensure customers didn’t get bombarded with irrelevant messages.
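
In practice a rules layer like that can start as nothing more exotic than a declarative configuration that the AI must satisfy before any message goes out. The categories below mirror the airline example, but the specific segments, jurisdictions, and windows are invented:

```python
from datetime import date, timedelta

# Illustrative rules layer; every value is an assumption, not the airline's actual policy.
RULES = {
    "eligible_products": {
        "frequent_flyer": ["lounge_upgrade", "companion_fare"],
        "new_customer": ["intro_credit_card_offer"],
    },
    "copy_by_jurisdiction": {"EU": "copy_v2_gdpr", "US": "copy_v1"},
    "anti_repetition_days": 14,  # do not repeat a campaign to the same customer within this window
}

def allowed(customer, product, jurisdiction, today=None):
    today = today or date.today()
    if product not in RULES["eligible_products"].get(customer["segment"], []):
        return False, "product not eligible for this segment"
    if jurisdiction not in RULES["copy_by_jurisdiction"]:
        return False, "no approved copy for this jurisdiction"
    if today - customer["last_contacted"] < timedelta(days=RULES["anti_repetition_days"]):
        return False, "anti-repetition window still open"
    return True, RULES["copy_by_jurisdiction"][jurisdiction]

customer = {"segment": "frequent_flyer", "last_contacted": date.today() - timedelta(days=30)}
print(allowed(customer, "lounge_upgrade", "EU"))   # -> (True, 'copy_v2_gdpr')
```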

These constraints become even more critical in the era of generative AI. As pioneers of these solutions are finding, customers will be quick to point out when the machine “breaks” and produces nonsensical solutions. The best approaches will therefore start small and be tailored to specific solutions where the rules can be tightly defined and where human decision makers can design rules for edge cases.

Deliver the end-to-end journey, and the specific use cases involved.

Customers will just ask for what they need, and will seek the simplest and/or most cost-effective way to get it done. What is the true end goal of the customer? How far can you get? With the ability to move information more easily across parties, you can build partnerships for data and for execution of the actions to help a customer through their journey; your ecosystem of business relationships will therefore differentiate your brand.

In his impressive demo of how HubSpot is incorporating generative AI into “ChatSpot,” Dharmesh Shah, CTO and founder of HubSpot, lays out how they are mingling the capabilities of HubSpot with OpenAI, and with other tools. Not only does he show HubSpot’s interface reduced to just a single text-entry prompt, but he also shows new capabilities that extend well beyond HubSpot’s current borders. A salesperson seeking to send an email to a business leader at a target company can use ChatSpot to perform research on the company and on the target business leader, and then draft an email that incorporates both information from the research and what it knows about the salesperson themselves. The resulting email draft can then be edited, sent, and tracked by HubSpot’s system, and the target business leader automatically entered into a contact database with all associated information.

The power of connected information, automatic code creation, and generated output is leading many other companies to extend their borders, not as conventional “vertical,” or “horizontal” expansion, but as “journey expansion.” When you can offer “services” based on a simple user command, those commands will reflect the customer’s true goal and the total solution they seek, not just a small component that you may have been dealing with before.

Differentiate via your ecosystem.

Solving for those broader needs inevitably will pull you into new kinds of partner relationships. As you build out your end-to-end journey capabilities, how you construct those business relationships will be critical new bases for strategy. How trustworthy, how well-permissioned, how timely, how comprehensive, and how biased is their data? How will they use data your brand sends out? What is the basis of your relationship, quality control, and data integration? Pre-negotiated privileged partnerships? A simple vendor relationship? How are you charging for the broader service, and how will the parties involved get their cut?

Similar to how search brands like Google, ecommerce marketplaces like Amazon, and recommendation engines such as Tripadvisor became gateways for sellers, more brands can become front-end navigators for a customer journey if they can offer quality partners, experience personalization, and simplicity. CVS could become a full health network coordinator that health providers, health tech, wellness services, pharma, and other support services will plug into. When its app can let you simply ask, “How can you help me lose 30 pounds?” or “How can you help me deal with my increasing arthritis?” the end-to-end program they can generate and then completely manage, through prompts to you and information passed around their network, will be a critical differentiator in how they, as a brand, build loyalty, capture your data, and use that to keep increasing service quality.

Prioritize safety, fairness, privacy, security, and transparency.

The way you manage data becomes part of your brand, and the outcomes for your customers will have edge cases and bias risks that you should seek out and mitigate. We are all reading stories of how people are pushing generative AI systems, such as ChatGPT, to extremes, and getting back what the application’s developers call “hallucinations,” or bizarre responses. We are also seeing responses that come back as solid assertions of wrong facts. Or responses that are derived from biased bases of data that can lead to dangerous outcomes for some populations. Companies are also getting “outed” for sharing private customer information with other parties, without customer permissions, clearly not for the benefit of their customers per se.

The risks — from the core data, to the management of data, to the nature of the output of the generative AI — will simply keep multiplying. Some companies, such as American Express, have created new positions for chief customer protection officers, whose role is to stay ahead of potential risk scenarios, but more importantly, to build safeguards into how product managers are developing and managing the systems. Risk committees on corporate boards are already bringing in new experts and expanding their purviews, but more action has to happen pre-emptively. Testing data pools for bias, understanding where data came from and its copyright/accuracy/privacy risks, managing explicit customer permissions, limiting where information can go, and constantly testing the application for edge cases where customers could push it to extremes, are all critical processes to build into one’s core product management discipline, and into the questions that top management routinely has to ask. Boards will expect to see dashboards on these kinds of activities, and other external watchdogs, including lawyers representing legal challenges, will demand them as well.

Is it worth it? The risks will constantly multiply, and the costs of creating structures to manage those risks will be real. We’ve only begun to figure out how to manage bias, accuracy, copyright, privacy, and manipulated ranking risks at scale. The opacity of the systems often makes it impossible to explain how an outcome happened if some kind of audit is necessary.

But nonetheless, the capabilities of generative AI are not only available, they are the fastest growing class of applications ever. The accuracy will improve as the pool of tapped data increases, and as parallel AI systems as well as “humans in the loop” work to find and remedy those nasty “hallucinations.”

The potential for simplicity, personalization, and democratization of access to new and existing applications will not only pull in hundreds of start-ups but also tempt many established brands into creating new AI-forward offerings. If they can do more than just amuse, and actually take a customer through more of the requirements of their journey than ever before, and do so in a way that inspires trust, brands could open up new sources of revenue from the services they can enable beyond their currently narrow borders. For the right use cases, speed and personalization could possibly be worth a price premium. But more likely, the automation abilities of AI will pull costs out of the overall system and put pressure on all participants to manage efficiently, and compete accordingly.

We are now opening up a new dialogue between brands and their customers. Literally. Not like the esoteric descriptions of what happened in the earlier days of digital interaction. Now we are talking back and forth. Getting things done. Together. Simply. In a trustworthy fashion. Just how the customer wants it. The race is on to see which brands can deliver.

Using AI to Adjust Your Marketing and Sales in a Volatile World https://smallbiz.com/using-ai-to-adjust-your-marketing-and-sales-in-a-volatile-world/ Wed, 12 Apr 2023 12:05:57 +0000 https://smallbiz.com/?p=100075

Much has been written over the years about how firms lack visibility into the returns from their marketing investments. In an analog world, the perennial reason offered for this problem was difficulty establishing a causal link between investments made in marketing activities and the market (or customer) response to those actions.

In the digital world, a common way to build causal links is by running a large number of relatively cheap experiments through which firms can connect marketing and sales actions to a customer response. Firms can track customer responses throughout the journey from search to click to purchase, and even to consumption. The result has been an exponential increase in the amount of data on that journey to which firms have access.

We wanted to know why some firms are much better and faster than others at adapting their use of customer data to respond to changing or uncertain marketing conditions. Especially during the initial months of the pandemic in 2020, and more recently in 2022, when recessionary forces began to affect the nature of customer demand, some firms were able to analyze the burgeoning customer journey data and pivot, adapting their marketing and sales efforts much faster than their competitors. We have observed that a common thread across these fast-acting firms is their use of AI models to predict outcomes at various stages of the customer journey — for example, using AI to analyze historical consumer behavior data and predict the likelihood of a customer responding favorably to a marketing campaign.

What else do we see happening in these firms? First, while their competitors respond reactively to actions taken by customers, these firms are taking a proactive approach to managing their customer relationships. They’re using AI to predict which customers are likely to churn and what corrective action can be taken to prevent the customer from defecting, while their competitors react after the customers have already left. And when their predictions go off track because of external changes or market conditions, they use that feedback to quickly reorient and redirect their marketing and sales efforts. Using AI models to predict customer response translated, in effect, to designing and running a large number of experiments that helped these firms respond to market changes faster than firms not using those tools.
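
In code, “using AI models to predict outcomes at various stages of the customer journey” is usually unglamorous: a supervised model trained on historical behavior and then used to rank whom to contact first. Below is a minimal churn-propensity sketch on synthetic data; the features, and scikit-learn as the tool, are assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Synthetic customer history: tenure (months), support tickets, days since last purchase.
X = np.column_stack([
    rng.integers(1, 60, n),
    rng.poisson(2, n),
    rng.integers(0, 365, n),
])
# Synthetic label: churn is more likely with short tenure, many tickets, and long inactivity.
churn_logit = -1.5 - 0.03 * X[:, 0] + 0.3 * X[:, 1] + 0.01 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-churn_logit)))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Proactive use: rank current customers by predicted churn risk and intervene at the top,
# rather than reacting after they have already left.
risk = model.predict_proba(X_test)[:, 1]
top_risk = np.argsort(risk)[::-1][:5]
print("Contact these customers first:", top_risk, risk[top_risk].round(2))
```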

Prediction Models Are Changing How Strategy Works

Consider the example of a global trading firm engaged in the sourcing and distribution of commodity bulk chemicals. In early 2019 the firm began using AI-based prediction models to understand the flow of opportunities through the various stages of clients’ RFP-based buying processes. The firm learned that quality-related factors were primary determinants of getting short-listed by clients. They began using this information to selectively pursue client opportunities.

By May 2020, however, the company’s AI-model predictions were proving to be wrong. Further analysis revealed that delivery-related terms were now better predictors of being short-listed by clients, and the firm quickly and successfully switched its engagement model globally. Firm leaders who would previously have received information about supply-chain issues through macroeconomic data or a revenue shortfall at the end of a couple of quarters were able, using AI to predict intermediate outcomes in clients’ buying processes, to rapidly switch the marketing and sales approach to better align with shifts in the marketplace.

We found another example at a major real estate property developer in the UK. A January 2020 analysis of optimal incentives to tenants suggested that, given a low likelihood of corporate space remaining unrented for more than 30 days, it should be conservative in offering incentives to existing corporate tenants. The analysis further showed flexible workspaces to be less profitable than renting out corporate office space given competitive cost pressures. By late February 2020, in the very early stages of the pandemic, the developer’s updated AI model suggested increasing the flex workspace footprint by 30% and offering generous incentives to lock in existing tenants. These recommendations led the developer to begin changing its sales strategy by the middle of March, much faster than competitors still relying on the first quarter (ending March) output of their marketing and sales models. A month’s or even a week’s lead can make a significant difference in a competitive market.

In the preceding examples, each firm had to specify goals when setting up its AI models to predict outcomes. A goal might be to achieve a specific customer-acquisition level when given a specific marketing budget. Well-designed AI models are about enhancing business outcomes — not just accurate predictions. They balance the benefit of a correct prediction against the cost of an incorrect one and work within organizational constraints like marketing budgets. Being trained using historical data, AI models provide firms with a better, more sophisticated and nimble understanding of the links between their actions and the market or customer response.
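
One way to make “balancing the benefit of a correct prediction against the cost of an incorrect one” concrete: under standard expected-cost reasoning, the probability threshold at which it is worth acting depends only on the two error costs. A small sketch with invented costs:

```python
# Expected-cost view of where to set an action threshold (both costs are illustrative).
cost_false_positive = 20    # e.g., sending a retention offer to a customer who would have stayed
cost_false_negative = 300   # e.g., losing a customer we failed to contact

# Acting is worthwhile when p * cost_false_negative > (1 - p) * cost_false_positive,
# i.e., when p exceeds cost_fp / (cost_fp + cost_fn).
threshold = cost_false_positive / (cost_false_positive + cost_false_negative)
print(f"Act on customers whose predicted risk exceeds {threshold:.1%}")   # -> 6.3%

# A budget constraint fits the same arithmetic: with funds for only N interventions,
# rank customers by predicted risk and take the N highest above the threshold.
```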

Understanding the Role of Feedback Loops

Marketing and sales have traditionally lacked an approach to the classic “SENSE → RESPONSE” feedback loop commonly exploited in the engineering world. Feedback loops enable systems to change input mix and system characteristics to enhance output. The lagged effect of marketing actions, and the fact that customer response is more often than not the cumulative result of multiple actions taken by the firm, make it hard to establish causality and build a clear feedback loop. It is this lack of a feedback loop that limits firms’ ability to assess the ROI of their marketing and sales efforts. The absence of feedback loops further results in a disconnect between episodic strategy formulation (the realm of senior management) and the constant execution in the field that is typically managed at the frontline.

AI prediction models can pick up trends at a granular level, such as at the level of individual transactions. The field information provided by these models can be used to update and tweak marketing and sales strategy faster and more frequently, enabling firms to close the gap between strategy and execution.

Here’s an example: A 200-year-old North American manufacturing firm had significantly increased its marketing lead-generation activities but had yet to achieve a significant increase in sales. The firm was convinced it had a marketing problem. It used an AI model to analyze the data and found that the increased marketing spending had indeed generated high-quality leads, but not higher overall sales. Subsequent analyses revealed that the manufacturer’s limited sales resources were part of the problem. The sales team had cherry-picked the best leads from the incremental marketing spend, but ignored a corresponding number of leads it would otherwise have followed up on.

The company now understood it had a sales-capacity issue, not a marketing problem. The analysis enabled the manufacturer to appropriately balance sales and marketing expenses to generate stronger revenue. Without the benefit of the data analysis, the siloed nature of the marketing and sales organizations would have made it difficult and time-consuming to do such a cross-functional study or reallocate resources quickly.

This disconnect is further illustrated by the example of a consumer-electronics company that ceased doing business in Russia following the invasion of Ukraine. The company knew what its revenue shortfall would be due to lost sales in Russia and associated markets, but faced the difficult question of how to optimally reallocate the marketing spend to other markets to try to offset the lost sales. An AI-optimized scenario-planning exercise suggested the best way to reallocate the available marketing budget and quantified the expected net drop in sales and the increase in marketing budget necessary to offset the loss by increasing sales in other regions. The analysis also revealed that it would be too expensive to increase marketing to fully offset the losses from Russia. But it still enabled the firm to optimally reduce sales losses by reallocating existing marketing promotion budgets to other regions.

Flipping the Segmentation Process

As a result of the feedback-loop focus, we see the use of AI models also changing the practice of segmentation. In theory, segmentation is defined as the process of identifying a group of customers who have a common set of needs (to develop a unique product/solution to serve that segment), that share common identifiable characteristics (to be able to identify customers in the target segment), and that are likely to react in a similar manner to actions taken by the firm (to design the engagement strategy and exploit economies of scale). In practice, most firms in the analog world focus on the first two parts of the definition, i.e., a common set of needs and common characteristics. The result is an outside-in approach: “Let’s figure out what this group truly needs and then design the right product to serve these needs better than anyone else and, as a result, be able to extract a higher price.”

In AI-based prediction models, the practice of segmentation focuses on the third part of the definition, i.e., the likelihood that all customers in a segment will react similarly to marketing and sales actions taken by the firm. For example, an AI-based prediction model might ask which customers are better served by the sales force in the field or the tele-sales team, or which customers are most likely to respond positively to a specific price promotion campaign. Firms can use an AI model’s predictions to align the appropriate marketing and sales resources to serve each demand opportunity.

Considering the unmatched targeting abilities of predictive models, it is easier to take organizational (or expected near-term organizational) capabilities as a given and find the customers most likely to match those capabilities. This is especially true in a rapidly changing environment where market conditions and customer behavior can change far faster than organizational capabilities can evolve.
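
Operationally, this flipped segmentation amounts to scoring every customer with the response model and cutting the list by the capability the organization can actually staff. A sketch with invented propensity scores and channel capacities:

```python
import pandas as pd

# Invented scores from a response model, plus fixed capacity in each channel.
scores = pd.DataFrame({
    "customer_id": range(1, 9),
    "p_respond": [0.92, 0.81, 0.74, 0.55, 0.41, 0.33, 0.18, 0.07],
})
FIELD_SALES_SLOTS = 2   # scarce, expensive capability
TELESALES_SLOTS = 3

ranked = scores.sort_values("p_respond", ascending=False).reset_index(drop=True)
ranked["segment"] = "email_promo_only"
ranked.loc[:FIELD_SALES_SLOTS - 1, "segment"] = "field_sales"
ranked.loc[FIELD_SALES_SLOTS:FIELD_SALES_SLOTS + TELESALES_SLOTS - 1, "segment"] = "tele_sales"
print(ranked)
# Segments are defined by likely reaction and available capability, not demographics first;
# when behavior or capacity shifts, re-score the customers and re-cut the segments.
```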

Where Are We Headed Next with AI-based Prediction Models?

The availability of customer-specific data and the ability of AI and machine learning to provide better predictions are poised to force companies to create integrated customer-facing organizations that fuse traditional marketing and sales functions. Ideally, this will help organizations deliver a superior customer experience that results in enhanced profitability.

Here’s one more example: An international manufacturer wanting to improve its marketing function using AI models initially focused on prioritizing sales opportunities. Analysis of its data, however, found that, dollar-for-dollar, efforts by the field sales force focused on retaining existing channel partners had a greater impact on revenue than a similar amount spent solely on marketing. In fact, optimizing spend across channel partner retention, marketing, and sales had a greater impact on overall business KPI for a given level of overall spend than would have been achieved had the focus remained exclusively on sales-opportunity prioritization. Truly automated approaches to AI can “let the data speak” to help identify entirely new avenues across traditional marketing and sales activities with the potential to impact business KPIs and optimally balance resourcing between those activities.

Digitally native firms may make quick progress on integration of AI models, but we are concerned that legacy firms that grew up in the analog world are going to run into two major stumbling blocks and fall behind their competitors. The first is the siloed nature of their sales, marketing, and support organizations, which will impede enterprise-wide integration of customer-facing functions. The second stumbling block is that the only entities that can break this stalemate — the CEO and board — are often ignorant of how AI-based prediction models can redefine the way firms engage with customers and market segments.

Boards, unless they have members with tech expertise, are unlikely to demand the organizational transformations needed to make this happen. Ample evidence of this is found in traditional, sales-led enterprise software firms that have struggled to defend themselves from nimble digitally native competitors that take a holistic approach to serving customers and understanding the opportunities in their data.

Will machines take over marketing and sales functions? No. Marketing and sales will not be run entirely by machines. We still need humans to make non-obvious decisions. When it comes to updating strategy, a human will always be needed to ensure the validity of AI-generated recommendations before acting on them. Humans are needed to monitor outcomes on an ongoing basis in order to provide continuous feedback to the AI models.

Remember, despite all their strengths, AI tools are far from infallible. AI at its best is a tool that augments human capability, reshaping how we make decisions in functions such as marketing and sales and helping firms maintain a competitive advantage.

How to Set Your AI Project Up for Success https://smallbiz.com/how-to-set-your-ai-project-up-for-success/ Wed, 08 Dec 2021 13:25:33 +0000 https://smallbiz.com/?p=51450

Picking the right AI project for your company often comes down to having the right ingredients and knowing how to combine them. That, at least, is how Salesforce’s Marco Casalaina tends to think about it. The veteran artificial intelligence and data science expert oversees Einstein, Salesforce’s AI technology, and has made a career out of making emerging technologies more intuitive and accessible for all. With Einstein, he’s working to help Salesforce customers — from small businesses to nonprofits to Fortune 50 companies — realize the full benefits of AI. HBR spoke with Casalaina about what goes into a successful AI project, how to communicate as a data scientist, and the one question you really need to ask before launching an AI pilot.

You’ve been working in AI for a long time now. You worked for Salesforce years ago, then at other companies, and now you’ve come back to lead. How would you describe what it is you do in this work? 

I bring machine learning into the things that people use every day — and I do it in a way that aligns with their intuition. The problem with machine learning and AI — which are two sides of the same coin — is that most people don’t know what either really means. They often have an outsized idea of what AI can do, for example. And of course, AI is always changing, and it is a powerful thing, but its powers are limited. It’s not omniscient.

The point you’re making about how imagination can take hold explains a lot of the issues businesses run into with AI. So, when you’re thinking about the kinds of problems that AI is good at solving, what do you consider?

When I talk to customers, I like to break it down into ingredients. If you think about a fast food taco, there are six main ingredients: meat, cheese, tomatoes, beans, lettuce and tortillas. AI isn’t that different: there’s a menu of certain things that it can do. When you have an idea of what they are, it gives you an idea of what its powers are.

I’m intrigued! So, what are AI’s ingredients? 

The first ingredient is “yes” and “no” questions. If I send you an email, are you going to open it or not? These give you a probability of whether something is going to happen. We get a lot of mileage out of “yes” or “no” questions. They’re like the cheese for us — we kind of put that in everything.

The second ingredient is numeric prediction. How many days is it going to take you to pay your bill? How long is it going to take me to fix this person’s refrigerator?

Then, third, we have classifications. I can take a picture of this meeting that we’re in right now and ask, “are there people in this picture?” “How many people are in this picture?” There are text classifications, too, which you see if you ever interact with a chatbot.

The fourth ingredient is conversions. That could be voice transcription, it could be translation. But basically, you’re just taking information and translating it from one format to another.

The tortilla, if we’re sticking to our analogy, is the rules. Almost every functional AI system that exists in the world today works through some manner of rules that are encoded in the system. The rules — like the tortilla — hold everything together.

So how do you, personally, apply this in your work at Salesforce? Because I think people often struggle with figuring out where to start with an AI project. 

The questions I ask are, “What data do we have?” And, “What concrete problems can I solve with it?”

In this job at Salesforce, I started with something every salesperson tracks as a natural part of their job: categorizing a lead by giving it a score of how likely it is to close.

Data sets like these are a key source of truth from which to develop an AI-based project. People want to do all kinds of things with AI capabilities, but if you don’t have the data, then you have a problem.

Getting into the next phase of this, let’s talk about the lifecycle of finding a project and deploying it. What are the questions you find yourself asking when thinking about how to get from pilot to rollout?

What problem you’re trying to solve — that’s the first question you need to answer. Am I trying to prioritize people’s time? Am I trying to automate something new? Then, you confirm that you have the data for this project, or that you can get it.

The next question you need to ask is: Is this a reasonable goal? If you’re saying, I want to automate 100% of my customer service queries, it’s not going to happen. You’re setting yourself up for failure. Now, if 25% of your customer service queries are requests to reset a password, and you want to automate that and take it off your agents’ plates, that is a reasonable goal.

Another question is: Can a human do it? Most of the time AI can’t do anything that humans can’t do.

Let’s say you’re an insurance company and you want to use a picture of a dented car to find out how much it’s going to cost to fix it. If you might reasonably expect that Joe down at the body shop can look at the picture and say, this is going to cost $1,500, then you could probably train AI to do it too. If they can’t, well, then an AI probably can’t either.

How long do you want to spend in a pilot phase? Because a lot of what you’re doing, other people are trying to do, too.  

AI projects tend to have uncomfortably long pilot periods — and they should. There’s two reasons for this.

First, to determine whether it actually works the way it should. Do people trust it? Is it explaining itself sufficiently for the weight of the problem? At one extreme there’s things like an AI-driven medical diagnosis, which can have a huge impact on someone’s life. You better tell me exactly why you think I have cancer, right? But if an AI recommends a movie I don’t like, I don’t really care why it’s telling me that. A lot of business problems kind of fall somewhere in between. You need to share just enough explanation so your users will trust it. And you need this pilot period to verify that your users understand it.

Second, you need to measure the value of the AI solution versus baseline — human interaction. Think about automating customer service queries. For customers using the chatbot, how many of those are actually answering the right questions? If I use the DMV’s chatbot and say, “I lost my license” and it says, “Fill out this form and you’ll get a replacement,” well, that’s what I was asking for. But if your chatbot can’t answer your customers’ questions, you end up with frustrated customers who hate your chatbot and end up talking to a human anyway.

Pivoting for a second here, you’ve been in this job for a few years at this point. What are some of the big things you’ve learned over that time? 

We’ve learned how to find and use data sets to solve problems. Now, we help people understand how the data that they’re putting into their business systems — just by virtue of doing their jobs — can be used to develop machine learning that helps them solve problems more efficiently. But we’ve also learned how important a role intuition plays in that process.

How so?

So, we released a product called Einstein prediction builder about two years ago. A lot of customers are using it now, but it didn’t have the same rapid adoption curve as some of the more self-explanatory services like lead scoring.

Einstein prediction builder allows you to build a custom prediction for questions like, “Will my customer pay their bill late or not?” We realized that to get to that prediction, people have to make a bit of a mental leap: I would like to know the answer to this question, so I want to make a prediction about that.

That was tough for a lot of customers. Now, we have a new product, a recommendation builder. It’s a little bit more self-explanatory, because we’re also introducing a template system. For example, it will recommend what parts to put on the truck when a field representative is sent out to fix a refrigerator. We’ll lead the horse to water, right, from the Salesforce perspective, by having the automated step there, and work with customers to understand what parts they might need for the scenarios they might face.

As data scientists in the AI field, we have a tendency to think about algorithms, or maybe slightly higher level abstractions. I’ve learned we really need to get into our customers’ heads and express the solution to the problem in terms that they will relate to. So, I’m not just making a recommendation, I am specifically recommending the part that goes into a project; I’m not just making a prediction, I am specifically answering the question, are you going to pay your bill or not?

And then you have to decide, if I make that prediction, I give you a probability of the guy paying late, what are we going to do about it?

If you’re speaking to leaders who are thinking about this, it sounds like part of what you’re talking about is the need to stay grounded when considering what problems you should try to solve with AI and what you have on hand that can help you do it.

Right, it’s going back to the question of: Can a human do it? If they can, okay, maybe AI is a great way to take that task off a human’s plate to free them up for other magical things.
