What Roles Could Generative AI Play on Your Team? https://smallbiz.com/what-roles-could-generative-ai-play-on-your-team/ Thu, 22 Jun 2023 12:15:19 +0000

The frenzy surrounding the launch of Large Language Models (LLMs) and other types of Generative AI (GenAI) isn’t going to fade anytime soon. Users of GenAI are discovering and recommending new and interesting use cases for their business and personal lives. Many recommendations start with the assumption that GenAI requires a human prompt. Indeed, Time magazine recently proclaimed “prompt engineering” to be the next hot job, with salaries reaching $335,000. Tech forums and educational websites are focusing on prompt engineering, with Udemy already offering a course on the topic, and several organizations we work with are now beginning to invest considerable resources in training employees on how best to use ChatGPT.

However, it may be worth pausing to consider other ways of interacting with GPT technologies, which are likely to emerge soon. We present an intuitive way to think about this issue, based on our own survey of GenAI developments and on conversations with companies that are seeking to develop versions of these technologies.

A Framework of GPT Interactions

A good starting point is to distinguish between who is involved in the interaction — individuals, groups of people, or another machine — and who starts the interaction, human or machine. This leads to six different types of GenAI uses, shown below. ChatGPT, where one human initiates interaction with the machine, is already well-known. We now describe each of the other GPTs and outline their potential.
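Since the framework is easiest to grasp as a grid, here is a rough sketch of the two dimensions and the six resulting types, reconstructed from the descriptions that follow; the Python below is purely illustrative.

```python
# Rough reconstruction of the 2x3 framework described in this article.
# First key: who the AI interacts with; second key: who initiates the interaction.
GENAI_ROLES = {
    ("individual", "human-initiated"):   "ChatGPT",      # a person prompts the machine
    ("individual", "machine-initiated"): "CoachGPT",     # the machine observes and advises one person
    ("group",      "human-initiated"):   "GroupGPT",     # group members prompt a shared AI participant
    ("group",      "machine-initiated"): "BossGPT",      # the machine coordinates the group unprompted
    ("machine",    "human-initiated"):   "AutoGPT",      # a person tasks a machine that enlists other machines
    ("machine",    "machine-initiated"): "ImperialGPT",  # machines direct each other, and potentially humans
}

print(GENAI_ROLES[("group", "machine-initiated")])  # -> "BossGPT"
```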

CoachGPT is a personal assistant that provides you with a set of suggestions on managing your daily life. It would base these suggestions not on explicit prompts from you, but on observations of what you do and of your environment. For example, it could observe you as an executive and note that you find it hard to build trust in your team; it could then recommend precise actions to overcome this blind spot. It could also come up with personalized advice on development options or even salary negotiations.

CoachGPT would subsequently track which recommendations you adopted and which of them benefited you, using that feedback to improve its advice. Over time, you would get a highly personalized AI advisor, coach, or consultant.

Organizations could adopt CoachGPT to advise customers on how to use a product, whether it's a construction company advising end users on how best to use its equipment or an accounting firm offering real-time advice on how best to account for a set of transactions.

To make CoachGPT effective, individuals and organizations would have to allow it to work in the background, monitoring online and offline activities. Clearly, serious privacy considerations need to be addressed before we entrust our innermost thoughts to the system. However, the potential for positive outcomes in both private and professional lives is immense.

GroupGPT would be a bona fide group member that can observe interactions between group members and contribute to the discussion. For example, it could conduct fact checking, supply a summary of the conversation, suggest what to discuss next, play the role of devil’s advocate, provide a competitor perspective, stress-test the ideas, or even propose a creative solution to the problem at hand.

The requests could come from individual group members or from the team’s boss, who need not participate in team interactions, but merely seeks to manage, motivate, and evaluate group members. The contribution could be delivered to the whole group or to specific individuals, with adjustments for that person’s role, skill, or personality.

The privacy concerns mentioned above also apply to GroupGPT, but, if addressed, organizations could take advantage of GroupGPT by using it for project management, especially on long and complicated projects involving relatively large teams across different departments or regions. Since GroupGPT would overcome human limitations on information storage and processing capacity, it would be ideal for supporting complex and dispersed teams.

BossGPT takes an active role in advising a group of people on what they could or should do, without being prompted. It could provide individual recommendations to group members, but its real value emerges when it begins to coordinate the work of group members, telling them as a group who should do what to maximize team output. BossGPT could also step in to offer individual coaching and further recommendations as the project and team dynamics evolve.

The algorithms necessary for BossGPT to work would be much more complicated, as they would have to consider somewhat unpredictable individual and group reactions to instructions from a machine, but it could have a wide range of uses. For example, an executive changing jobs could request a copy of her reactions to her first organization’s BossGPT instructions, which could then be used to assess how she would fit into the new organization — and the new organization-specific BossGPT.

At the organizational level, companies could deploy BossGPT to manage people, thereby augmenting — or potentially even replacing — existing managers. Similarly, BossGPT has tremendous applications in coordinating work across organizations and managing complex supply chains or multiple suppliers.

Companies could turn BossGPT into a product, offering their customers AI solutions to help them manage their business. These solutions could be natural extensions of the CoachGPT examples described earlier. For example, a company selling construction equipment could offer BossGPT to coordinate many end users on a construction site, and an accounting firm could provide it to coordinate the work of many employees of its customers to run the accounting function in the most efficient way.

AutoGPT entails a human giving a request or prompt to one machine, which in turn engages other machines to complete the task. In its simplest form, a human might instruct a machine to complete a task, but the machine realizes that it lacks a specific piece of software needed to execute it; it would then search for the missing software on Google, download and install it, and use it to finish the request.

In a more complicated version, humans could give AutoGPT a goal (such as creating the best viral YouTube video) and instruct it to interact with another GenAI to iteratively come up with the best ChatGPT prompt to achieve the goal. The machine would then launch the process by proposing a prompt to another machine, then evaluate the outcome, and adjust the prompt to get closer and closer to the final goal.
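A minimal sketch of that refinement loop might look like the following, where `generate` and `score_against_goal` are hypothetical stubs standing in for real model calls and evaluation metrics:

```python
# Illustrative sketch of an AutoGPT-style prompt-refinement loop.
def generate(prompt: str) -> str:
    return f"[model output for: {prompt[:40]}...]"   # replace with a real LLM call

def score_against_goal(output: str, goal: str) -> float:
    return 0.5                                       # replace with a real evaluation

def refine_prompt(goal: str, max_rounds: int = 5) -> str:
    prompt = f"Write a prompt that will achieve this goal: {goal}"
    best_prompt, best_score = prompt, float("-inf")
    for _ in range(max_rounds):
        output = generate(prompt)                    # one machine proposes an output
        score = score_against_goal(output, goal)     # another machine (or a metric) evaluates it
        if score > best_score:
            best_prompt, best_score = prompt, score
        prompt = generate(                           # ask the model to rewrite its own prompt
            f"Goal: {goal}\nLast prompt: {prompt}\nScore: {score:.2f}\n"
            "Rewrite the prompt to get closer to the goal."
        )
    return best_prompt

print(refine_prompt("create a viral YouTube video script"))
```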

In the most complicated version, AutoGPT could draw on functionalities of the other GPTs described above. For example, a team leader could task a machine with maximizing both the effectiveness and job satisfaction of her team members. AutoGPT could then switch between coaching individuals through CoachGPT, providing them with suggestions for smoother team interactions through GroupGPT, while at the same time issuing specific instructions on what needs to be done through BossGPT. AutoGPT could subsequently collect feedback from each activity and adjust all the other activities to reach the given goal.

Unlike the versions above, which have yet to be created, a version of AutoGPT was rolled out in April 2023 and is quickly gaining broad acceptance. The technology is still not perfect and requires improvements, but it is already evident that AutoGPT can complete jobs that require several tasks to be carried out one after the other.

We see its biggest applications in complex tasks, such as supply chain coordination, but also in fields such as cybersecurity. For example, organizations could prompt AutoGPT to continually address any cybersecurity vulnerabilities, which would entail looking for them — which already happens — but then instead of simply flagging them, AutoGPT would search for solutions to the threats or write its own patches to counter them. A human might still be in the loop, but since the system is self-generative within these limits, we believe that AutoGPT’s response is likely to be faster and more efficient.
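One way such a remediation loop could be wired together is sketched below; every function is a hypothetical placeholder, and a human approval gate is kept in the loop:

```python
# Hypothetical sketch of an AutoGPT-style vulnerability remediation loop.
# A real system would wire in scanners, a code-generation model, and an approval workflow.
def scan_for_vulnerabilities() -> list[dict]:
    return [{"id": "CVE-XXXX-0001", "component": "example-lib", "severity": "high"}]

def propose_patch(vuln: dict) -> str:
    return f"# suggested patch for {vuln['id']} in {vuln['component']}"

def human_approves(vuln: dict, patch: str) -> bool:
    return False  # default to requiring explicit sign-off

def remediation_cycle() -> None:
    for vuln in scan_for_vulnerabilities():      # flagging already happens today
        patch = propose_patch(vuln)              # the new step: draft a fix, not just a flag
        if human_approves(vuln, patch):          # human stays in the loop
            print(f"Applying patch for {vuln['id']}")
        else:
            print(f"Queued {vuln['id']} for review")

remediation_cycle()
```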

ImperialGPT is the most abstract GenAI — and perhaps the most transformational — in which two or more machines would interact with each other, direct each other, and ultimately direct humans to engage in a course of action. This type of GPT worries most AI analysts, who fear losing control of AI and AI “going rogue.” We share these concerns, particularly if — as now — there are no strict guardrails on what AI is allowed to do.

At the same time, if ImperialGPT is allowed to come up with ideas and share them with humans, but its ability to act on those ideas is restricted, we believe it could generate extremely interesting creative solutions, especially for “unknown unknowns,” where human knowledge and creativity fall short. Such systems could then envision and game out multiple black swan events and worst-case scenarios, complete with potential costs and outcomes, to provide possible solutions.

Given the potential dangers of ImperialGPT, and the need for tight regulation, we believe that ImperialGPT will be slow to take off, at least commercially. We do anticipate, however, that governments, intelligence services, and the military will be interested in deploying ImperialGPT under strictly controlled conditions.

Implications for Your Business

So, what does our framework mean for companies and organizations around the world? First and foremost, we encourage you to step back and see the recent advances in ChatGPT as merely the first application of new AI technologies. Second, we urge you to think about the various applications outlined here and use our framework to develop applications for your own company or organization. In the process, we are sure you will discover new types of GPTs that we have not mentioned. Third, we suggest you classify these different GPTs in terms of potential value to your business, and the cost of developing them.

We believe that applications that begin with a single human initiating or participating in the interaction (GroupGPT, CoachGPT) will probably be the easiest to build and should generate substantial business value, making them the perfect initial candidates. In contrast, applications with interactions involving multiple entities or those initiated by machines (AutoGPT, BossGPT, and ImperialGPT) may be harder to implement, with trickier ethical and legal implications.

You might also want to start thinking about the complex ethical, legal, and regulatory concerns that will arise with each GPT type. Failure to do so exposes you and your company to both legal liabilities and — perhaps more importantly — an unintended negative effect on humanity.

Our next set of recommendations depends on your company type. A tech company or startup, or one that has ample resources to invest in these technologies, should start working on developing one or more of the GPTs discussed above. This is clearly a high-risk, high-reward strategy.

In contrast, if your competitive strength is not in GenAI or if you lack resources, you might be better off adopting a “wait and see” approach. This means you will be slow to adopt the current technology, but you will not waste valuable resources on what may turn out to be only an interim version of a product. Instead, you can begin preparing your internal systems to better capture and store data as well as readying your organization to embrace these new GPTs, in terms of both work processes and culture.

The launch and rapid adoption of GenAIs is rightly being seen as the next level in the evolution of AI and a potentially epochal moment for humanity in general. Although GenAIs represent breakthroughs in solving fundamental engineering and computer science problems, they do not automatically guarantee value creation for all organizations. Rather, smart companies will need to invest in modifying and adapting the core technology before figuring out the best way to monetize the innovations. Firms that do this right may indeed strike it rich in the GenAI goldrush.

Should You Start a Generative AI Company? https://smallbiz.com/should-you-start-a-generative-ai-company/ Mon, 19 Jun 2023 12:15:27 +0000

I am thinking of starting a company that employs generative AI but I am not sure whether to do it. It seems so easy to get off the ground. But if it is so easy for me, won’t it be easy for others too? 

This year, more entrepreneurs have asked me this question than any other. Part of what is so exciting about generative AI is that the upsides seem limitless. For instance, if you have managed to create an AI model with some kind of general language reasoning ability, you have a piece of intelligence that can potentially be adapted toward various new products that leverage this ability — like screenwriting, marketing materials, teaching software, customer service, and more.

For example, the software company Luka built an AI companion called Replika that enables customers to have open-ended conversations with an “AI friend.” Because the technology was so powerful, managers at Luka began receiving inbound requests to provide a white label enterprise solution for businesses wishing to improve their chatbot customer service. In the end, Luka’s managers used the same underlying technology to spin off both an enterprise solution and a direct-to-consumer AI dating app (think Tinder, but for “dating” AI characters).

In deciding whether a generative AI company is for you, I recommend answering the following two big questions: 1) Will your company compete on foundational models, or on top-layer applications that leverage these foundational models? And 2) Where along the continuum between a highly scripted solution and a highly generative solution will your company be located? Your answers will have long-lasting implications for your ability to defend yourself against the competition.

Foundational Models or Apps?

Tech giants are now renting out their most generalizable proprietary models — i.e., “foundational models” — and companies like EleutherAI and Stability AI are providing open-source versions of these foundational models at a fraction of the cost. Foundational models are becoming commoditized, and only a few startups can afford to compete in this space.

You may think that foundational models are the most attractive, because they will be widely used and their many applications will provide lucrative opportunities for growth. What is more, we are living in exciting times where some of the most sophisticated AI is already available “off the shelf” to get started with.

Entrepreneurs who want to base their company on foundational models are in for a challenge, though. As in any commoditized market, the companies that will survive are those that offer unbundled offerings for cheap or that deliver increasingly enhanced capabilities. For example, speech-to-text APIs like Deepgram and Assembly AI compete not only with each other but with the likes of Amazon and Google, in part by offering cheaper, unbundled solutions. Even so, these firms are in a fierce war on price, speed, model accuracy, and other features. In contrast, tech giants like Amazon, Meta, and Google make significant R&D investments that enable them to relentlessly deliver cutting-edge advances in image, language, and (increasingly) audio and video reasoning. For instance, it is estimated that OpenAI spent anywhere between $2 million and $12 million to computationally train ChatGPT — and this is just one of several APIs that it offers, with more on the way.

Instead of competing on increasingly commoditized foundational models, most startups should differentiate themselves by offering “top layer” software applications that leverage other companies’ foundational models. They can do this by fine-tuning foundational models on their own high-quality, proprietary datasets that are unique to their customer solution, providing high value to customers.

For instance, the marketing content creator Jasper AI grew to unicorn status largely by leveraging foundational models from OpenAI. To this day, the firm uses OpenAI to help customers generate content for blogs, social media posts, website copy, and more. At the same time, the app is tailored for its marketer and copywriter customers, providing specialized marketing content. The company also provides other specialized tools, like an editor that multiple team members can work on in tandem. Now that the company has gained traction, it can afford to spend more of its resources on reducing its dependency on the foundational models that enabled it to grow in the first place.
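As a rough illustration of the fine-tuning approach described above, the snippet below uses the OpenAI Python SDK; the file format, method names, and base-model ID are assumptions that vary by provider and SDK version, so treat it as a sketch rather than a recipe.

```python
# Sketch: fine-tune a hosted foundational model on proprietary examples.
# Assumes the OpenAI Python SDK (v1.x); details vary across providers and versions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# training_data.jsonl holds your proprietary, high-quality examples in the
# provider's chat format (one {"messages": [...]} object per line).
training_file = client.files.create(
    file=open("training_data.jsonl", "rb"),
    purpose="fine-tune",
)

job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",  # base model name is illustrative
)
print(job.id, job.status)
```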

Since the top-layer apps are where these companies find their competitive advantage, they must strike a delicate balance: protecting the privacy of their datasets from large tech players even as they rely on those players for foundational models. Given this, some startups may be tempted to build their own in-house foundational models. Yet, this is unlikely to be a good use of precious startup funds, given the challenges noted above. Most startups are better off leveraging foundational models to grow fast, instead of reinventing the wheel.

From Scripted to Generative

Your company will need to live somewhere along a continuum from a purely scripted solution to a purely generative one. Scripted solutions involve selecting an appropriate response from a dataset of predefined, scripted responses, whereas generative ones involve generating new, unique responses from scratch.

Scripted solutions are safer and more constrained, but also less creative and human-like, whereas generative solutions are riskier and unconstrained, but also more creative and human-like. More scripted approaches are necessary for certain use cases and industries, like medical and educational applications, where there need to be clear guardrails on what the app can do. Yet when the script reaches its limit, users may lose engagement and customer retention may suffer. Moreover, it is harder to grow a scripted solution because you constrain yourself right from the start, limiting your options down the road.
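The two ends of the continuum can be made concrete with a toy example: a scripted bot selects from a fixed response table, while a generative bot hands the user's message to a model. The `call_llm` function below is a hypothetical placeholder for a real API call.

```python
# Toy contrast between a scripted and a generative chatbot.
SCRIPTED_RESPONSES = {
    "refund": "You can request a refund within 30 days from your account page.",
    "hours":  "We are open 9am-5pm, Monday through Friday.",
}

def scripted_reply(message: str) -> str:
    # Pick a canned answer if a known topic appears; otherwise fall back safely.
    for topic, answer in SCRIPTED_RESPONSES.items():
        if topic in message.lower():
            return answer
    return "Sorry, I can only help with refunds and opening hours."

def call_llm(prompt: str) -> str:
    return f"[model-generated reply to: {prompt}]"  # hypothetical stand-in for a real API call

def generative_reply(message: str) -> str:
    # No fixed script: the model composes a new answer, so guardrails must live elsewhere.
    return call_llm(f"You are a helpful support agent. Customer says: {message}")

print(scripted_reply("What are your hours?"))
print(generative_reply("What are your hours?"))
```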

On the other hand, more generative solutions carry their own challenges. Because AI-based offerings include intelligence, there are more degrees of freedom in how consumers can interact with them, increasing the risks. For example, one married father tragically died by suicide following a conversation with an AI chatbot app, Chai, that encouraged him to sacrifice himself to save the planet. The app leveraged a foundational language model (a bespoke version of GPT-J) from EleutherAI. The founders of Chai have since modified the app so that mentions of suicidal ideation are met with helpful text. Interestingly, one of the founders of Chai, Thomas Rianlan, took the blame, saying: “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts.”

It is challenging for managers to anticipate all the ways in which things can go wrong with a highly generative app, given the “black box” nature of the underlying AI. Doing so involves anticipating risky scenarios that may be extremely rare. One way of anticipating such cases is to pay human annotators to screen content for potentially harmful categories, such as sex, hate speech, violence, self-harm, and harassment, and then use these labels to train models that automatically flag such content. Yet it is still difficult to come up with an exhaustive taxonomy. Thus, managers who deploy highly generative solutions must be prepared to proactively anticipate the risks, which can be both difficult and expensive. The same goes if you later decide to offer your solution as a service to other companies.
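A minimal version of that label-then-train loop, assuming you already have human-annotated examples and using scikit-learn purely for illustration, might look like this:

```python
# Sketch: train a simple flagging model from human-annotated content labels.
# Real systems use far larger datasets and stronger models; this only shows the loop.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical annotations: 1 = flagged by human reviewers, 0 = acceptable.
texts = [
    "I want to hurt myself",           # self-harm -> flag
    "You people are worthless",        # harassment -> flag
    "What time does the store open?",  # benign
    "Thanks, that was really helpful", # benign
]
labels = [1, 1, 0, 0]

flagger = make_pipeline(TfidfVectorizer(), LogisticRegression())
flagger.fit(texts, labels)

# Screen newly generated content before it reaches users.
print(flagger.predict(["You are completely worthless"]))  # likely [1] -> route to review or a safe fallback
```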

Because a fully generative solution is closer to natural, human-like intelligence, it is more attractive from the standpoint of retention and growth: it is more engaging and can be applied to more new use cases.

• • •

Many entrepreneurs are considering starting companies that leverage the latest generative AI technology, but they must ask themselves whether they have what it takes to compete on increasingly commoditized foundational models, or whether they should instead differentiate on an app that leverages these models.

They must also consider what type of app they want to offer on the continuum from a highly scripted to a highly generative solution, given the different pros and cons accompanying each. Offering a more scripted solution may be safer but limit their retention and growth options, whereas offering a more generative solution is fraught with risk but is more engaging and flexible.

We hope that entrepreneurs will ask these questions before diving into their first generative AI venture, so that they can make informed decisions about what kind of company they want to be, scale fast, and maintain long-term defensibility.

Generative AI Will Change Your Business. Here’s How to Adapt. https://smallbiz.com/generative-ai-will-change-your-business-heres-how-to-adapt/ Wed, 12 Apr 2023 12:25:47 +0000

It’s coming. Generative AI will change the nature of how we interact with all software, and given how many brands have significant software components in how they interact with customers, generative AI will drive and distinguish how more brands compete.

In our last HBR piece, “Customer Experience in the Age of AI,” we discussed how the use of one’s customer information is already differentiating branded experiences. Now with generative AI, personalization will go even further, tailoring all aspects of digital interaction to how the customer wants it to flow, not how product designers envision cramming in more menus and features. And then as the software follows the customer, it will go to places that range beyond the tight boundaries of a brand’s product. It will need to offer solutions to things the customer wants to do. Solve the full package of what someone needs, and help them through their full journey to get there, even if it means linking to outside partners, rethinking the definition of one’s offerings, and developing the underlying data and tech architecture to connect everything involved in the solution.

Generative AI can “generate” text, speech, images, music, video, and especially code. When that capability is joined with a feed of someone’s own information, used to tailor the when, what, and how of an interaction, the ease with which someone can get things done, and the broadening accessibility of software, go up dramatically. The simple input question box that stands at the center of Google and now of most generative AI systems, such as ChatGPT and DALL-E 2, will power more systems. Say goodbye to drop-down menus in software, and the inherently guided restrictions they place on how you use them. Instead, you’ll just see: “What do you want to do today?” And when you tell it what you want to do, it will likely offer some suggestions, drawing upon its knowledge of what you did last time, what triggers the system knows about your current context, and what you’ve already stored in the system as your core goals, such as “save for a trip,” “remodel our kitchen,” “manage meal plans for my family of five with special dietary needs,” etc.

Without the boundaries of a conventional software interface, consumers will just want to get done what they need, not caring whether the brand behind the software has limitations. The change in how we interact, and what we expect, will be dramatic, and dramatically more democratizing.

So much of the hype on generative AI has focused on its ability to generate text, images, and sounds, but it can also create code to automate actions and to facilitate pulling in external and internal data. By generating code in response to a command, it creates a shortcut that takes a user from a command to an action that simply gets done. No more working through all of the menus in the software. Even questions about, and analyses of, the data stored in an application will be easily handled just by asking: “Who are the contacts I have not called in the last 90 days?” or “When is the next time I am scheduled to be in NYC with an opening for dinner?” To answer these questions today, we have to go into an application and gather data (possibly manually) from outside of the application itself. Now, the query can be recognized, code created, possibilities ranked, and the best answer generated. In milliseconds.
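To make the mechanics concrete, here is a toy sketch of that flow, where `ask_llm` is a hypothetical stand-in for a code-generating model and the contacts table is invented:

```python
# Sketch: turn a natural-language question into code the application can run.
import sqlite3
from datetime import datetime, timedelta

def ask_llm(prompt: str) -> str:
    # A real model would write this from the schema; here we hard-code the SQL
    # it would plausibly return for the "not called in 90 days" question.
    return ("SELECT name FROM contacts "
            "WHERE last_called IS NULL OR last_called < :cutoff")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE contacts (name TEXT, last_called TEXT)")
db.executemany("INSERT INTO contacts VALUES (?, ?)", [
    ("Ana", "2023-01-05"), ("Ben", None), ("Chen", "2023-06-01"),
])

question = "Who are the contacts I have not called in the last 90 days?"
schema = "contacts(name TEXT, last_called TEXT in ISO date format)"
sql = ask_llm(f"Schema: {schema}\nWrite SQLite SQL for: {question}")

today = datetime(2023, 6, 26)  # stands in for the current date
cutoff = (today - timedelta(days=90)).strftime("%Y-%m-%d")
print([row[0] for row in db.execute(sql, {"cutoff": cutoff})])  # -> ['Ana', 'Ben'] with this toy data
```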

This drastically simplifies how we interact with what we think of as today’s applications. It also enables more brands to build applications as part of their value proposition. “Given the weather, traffic, and who I am with, give me a tourist itinerary for the afternoon, with an ongoing guide, and the ability to just buy any tickets in advance to skip any lines.” “Here’s my budget, here’s five pictures of my current bathroom, here’s what I want from it, now give me a renovation design, a complete plan for doing it, and the ability to put it out for bid.” Who will create these capabilities? Powerful tech companies? Brands who already have relationships in their relevant categories? New, focused disruptors? The game is just starting, but the needed capabilities and business philosophies are already taking shape.

A Broader Journey with Broader Boundaries

In a world where generative AI and all of the other evolving AI systems proliferate, building one’s own offering requires focusing on the broadest possible view of one’s pool of data, of the journeys you can enable, and of the risks they raise:

Bring data together.

Solving for a customer’s complete need will require pulling information from across your company, and likely from beyond its boundaries. One of the biggest challenges for most applications, and indeed for most IT departments, is bringing data together from disparate systems. Many AI systems can write the code needed to understand the schemas of two different databases and integrate them into one repository, which can save several steps in standardizing data schemas. AI teams still need to dedicate time to data cleansing and data governance (arguably even more so), for example, aligning on the right definitions of key data features. However, with AI capabilities in hand, the next steps in the process of bringing all the data together become easier.
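A toy illustration of that schema-integration step, using pandas and an invented column mapping of the kind an AI might propose and a data team would review:

```python
# Toy illustration: merge two customer tables whose schemas don't match.
import pandas as pd

crm = pd.DataFrame({"cust_id": [1, 2], "full_name": ["Ana Ruiz", "Ben Ali"]})
billing = pd.DataFrame({"customer_number": [1, 2], "name": ["Ana Ruiz", "Ben Ali"],
                        "last_invoice": ["2023-05-01", "2023-06-10"]})

# Hypothetical AI-suggested mapping from each source schema onto one shared schema.
mapping = {
    "crm":     {"cust_id": "customer_id", "full_name": "customer_name"},
    "billing": {"customer_number": "customer_id", "name": "customer_name"},
}

unified = pd.merge(
    crm.rename(columns=mapping["crm"]),
    billing.rename(columns=mapping["billing"]),
    on=["customer_id", "customer_name"],
    how="outer",
)
print(unified)
```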

Narrative AI, for example, offers a marketplace for buying and selling data, along with data collaboration software that allows companies to import data from anywhere into their own repositories, aligned to their schema, with merely a click. Data from across a company, from partners, or from sellers of data, can be integrated and then used for modeling in a flash.

Combining one’s own proprietary data with public data, with data from other available AI tools, and with data from many external parties can serve to dramatically improve the AI’s ability to understand one’s context, predict what is being asked, and have a broader pool from which to execute a command.

The old rule of “garbage in, garbage out” still applies, however. Especially when it comes to integrating third-party data, it is important to cross-check its accuracy against internal data before integrating it into the underlying data set. For example, one fashion brand recently found that gender data purchased from a third-party source didn’t match its internal data 50% of the time, so the source and its reliability really matter.
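A quick cross-check of that kind can be as simple as the following sketch, with invented column names and records chosen to mirror the 50% example above:

```python
# Sketch: measure how often a purchased attribute agrees with your own records
# before letting it into the modeling dataset.
import pandas as pd

internal = pd.DataFrame({"customer_id": [1, 2, 3, 4], "gender": ["F", "M", "F", "M"]})
purchased = pd.DataFrame({"customer_id": [1, 2, 3, 4], "gender": ["F", "F", "M", "M"]})

joined = internal.merge(purchased, on="customer_id", suffixes=("_internal", "_purchased"))
match_rate = (joined["gender_internal"] == joined["gender_purchased"]).mean()
print(f"Agreement with internal data: {match_rate:.0%}")  # 50% here -> investigate before integrating
```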

The “rules layer” becomes even more critical.

Without obvious restrictions on what a customer can ask for in an input box, the AI needs to have guidelines that ensure it responds appropriately to things beyond its means or that are inappropriate. This amplifies the need for a sharp focus on the rules layer, where the experience designers, marketers and business decision makers set the target parameters for the AI to optimize.

For example, for an airline brand that leveraged AI to decide on the “next best conversation” to engage in with customers, we set rules around what products could be marketed to which customers and what copy could be used in which jurisdictions, as well as anti-repetition rules to ensure customers didn’t get bombarded with irrelevant messages.

These constraints become even more critical in the era of generative AI. As pioneers of these solutions are finding, customers will be quick to point out when the machine “breaks” and produces nonsensical results. The best approaches will therefore start small, be tailored to specific solutions where the rules can be tightly defined, and let human decision makers design rules for edge cases.
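One lightweight way to encode such a rules layer is sketched below; the segments, products, jurisdictions, and repetition window are all invented for illustration.

```python
# Illustrative rules layer sitting between the model and the customer.
from datetime import datetime, timedelta

RULES = {
    "eligible_products": {"economy": ["seat_upgrade", "extra_bag"],
                          "frequent_flyer": ["lounge_pass", "seat_upgrade"]},
    "blocked_copy_by_jurisdiction": {"EU": ["limited-time pressure copy"]},
    "min_days_between_messages": 7,
}

def allowed_to_send(customer: dict, product: str, copy_tags: list[str], now: datetime) -> bool:
    if product not in RULES["eligible_products"].get(customer["segment"], []):
        return False  # product not marketable to this segment
    blocked = RULES["blocked_copy_by_jurisdiction"].get(customer["jurisdiction"], [])
    if any(tag in blocked for tag in copy_tags):
        return False  # copy not allowed in this jurisdiction
    last = customer.get("last_message_at")
    if last and now - last < timedelta(days=RULES["min_days_between_messages"]):
        return False  # anti-repetition rule
    return True

customer = {"segment": "economy", "jurisdiction": "EU",
            "last_message_at": datetime(2023, 6, 20)}
print(allowed_to_send(customer, "seat_upgrade", ["standard copy"], datetime(2023, 6, 26)))  # False: too soon
```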

Deliver the end-to-end journey, and the specific use cases involved.

Customers will just ask for what they need, and will seek the simplest and/or most cost-effective way to get it done. What is the true end goal of the customer? How far can you get? With the ability to move information more easily across parties, you can build partnerships for data and for execution of the actions that help a customer through their journey; your ecosystem of business relationships will therefore differentiate your brand.

In his impressive demo of how HubSpot is incorporating generative AI into “ChatSpot,” Dharmesh Shah, CTO and co-founder of HubSpot, lays out how the company is mingling the capabilities of HubSpot with OpenAI and with other tools. Not only does he show HubSpot’s interface reduced to just a single text-entry prompt, but he also shows new capabilities that extend well beyond HubSpot’s current borders. A salesperson seeking to send an email to a business leader at a target company can use ChatSpot to perform research on the company and on the target business leader, and then draft an email that incorporates both information from the research and what it knows about the salesperson themselves. The resulting email draft can then be edited, sent, and tracked by HubSpot’s system, and the target business leader automatically entered into a contact database with all associated information.

The power of connected information, automatic code creation, and generated output is leading many other companies to extend their borders, not as conventional “vertical,” or “horizontal” expansion, but as “journey expansion.” When you can offer “services” based on a simple user command, those commands will reflect the customer’s true goal and the total solution they seek, not just a small component that you may have been dealing with before.

Differentiate via your ecosystem.

Solving for those broader needs inevitably will pull you into new kinds of partner relationships. As you build out your end-to-end journey capabilities, how you construct those business relationships will become a critical new basis for strategy. How trustworthy, how well permissioned, how timely, how comprehensive, and how biased is their data? How will they use the data your brand sends out? What is the basis of your relationship, quality control, and data integration? Pre-negotiated privileged partnerships? A simple vendor relationship? How are you charging for the broader service, and how will the parties involved get their cut?

Similar to how search brands like Google, ecommerce marketplaces like Amazon, and recommendation engines such as Tripadvisor became gateways for sellers, more brands can become front-end navigators for a customer journey if they can offer quality partners, experience personalization, and simplicity. CVS could become a full health network coordinator that health providers, health tech, wellness services, pharma, and other support services plug into. When its app can let you simply ask, “How can you help me lose 30 pounds?” or “How can you help me deal with my increasing arthritis?”, the end-to-end program it can generate and then completely manage, through prompts to you and information passed around its network, will be a critical differentiator in how CVS, as a brand, builds loyalty, captures your data, and uses that data to keep increasing service quality.

Prioritize safety, fairness, privacy, security, and transparency.

The way you manage data becomes part of your brand, and the outcomes for your customers will have edge cases and bias risks that you should seek out and mitigate. We are all reading stories of how people are pushing generative AI systems, such as ChatGPT, to extremes and getting back what the applications’ developers call “hallucinations,” or bizarre responses. We are also seeing responses that come back as solid assertions of wrong facts, or responses derived from biased bases of data that can lead to dangerous outcomes for some populations. Companies are also getting “outed” for sharing private customer information with other parties, without customer permission and clearly not for the benefit of their customers.

The risks — from the core data, to the management of data, to the nature of the output of the generative AI — will simply keep multiplying. Some companies, such as American Express, have created new positions for chief customer protection officers, whose role is to stay ahead of potential risk scenarios, but more importantly, to build safeguards into how product managers are developing and managing the systems. Risk committees on corporate boards are already bringing in new experts and expanding their purviews, but more action has to happen pre-emptively. Testing data pools for bias, understanding where data came from and its copyright/accuracy/privacy risks, managing explicit customer permissions, limiting where information can go, and constantly testing the application for edge cases where customers could push it to extremes, are all critical processes to build into one’s core product management discipline, and into the questions that top management routinely has to ask. Boards will expect to see dashboards on these kinds of activities, and other external watchdogs, including lawyers representing legal challenges, will demand them as well.

Is it worth it? The risks will constantly multiply, and the costs of creating structures to manage those risks will be real. We’ve only begun to figure out how to manage bias, accuracy, copyright, privacy, and manipulated ranking risks at scale. The opacity of the systems often makes it impossible to explain how an outcome happened if some kind of audit is necessary.

But nonetheless, the capabilities of generative AI are not only available; they underpin the fastest-growing class of applications ever. The accuracy will improve as the pool of tapped data increases, and as parallel AI systems as well as “humans in the loop” work to find and remedy those nasty “hallucinations.”

The potential for simplicity, personalization, and democratization of access to new and existing applications will not only pull in hundreds of start-ups but will also tempt many established brands into creating new AI-forward offerings. If they can do more than just amuse, and can actually take a customer through more of the requirements of their journey than ever before, and do so in a way that inspires trust, brands could open up new sources of revenue from the services they enable beyond their currently narrow borders. For the right use cases, speed and personalization could possibly be worth a price premium. But more likely, the automation abilities of AI will pull costs out of the overall system and put pressure on all participants to manage efficiently, and compete accordingly.

We are now opening up a new dialogue between brands and their customers. Literally. Not like the esoteric descriptions of what happened in the earlier days of digital interaction. Now we are talking back and forth. Getting things done. Together. Simply. In a trustworthy fashion. Just how the customer wants it. The race is on to see which brands can deliver.
