What Roles Could Generative AI Play on Your Team? https://smallbiz.com/what-roles-could-generative-ai-play-on-your-team/ Thu, 22 Jun 2023

The frenzy surrounding the launch of Large Language Models (LLMs) and other types of Generative AI (GenAI) isn’t going to fade anytime soon. Users of GenAI are discovering and recommending new and interesting use cases for their business and personal lives. Many recommendations start with the assumption that GenAI requires a human prompt. Indeed, Time magazine recently proclaimed “prompt engineering” to be the next hot job, with salaries reaching $335,000. Tech forums and educational websites are focusing on prompt engineering, with Udemy already offering a course on the topic, and several organizations we work with are now beginning to invest considerable resources in training employees on how best to use ChatGPT.

However, it may be worth pausing to consider other ways of interacting with GPT technologies that are likely to emerge soon. We present an intuitive way to think about this issue, based on our own survey of GenAI developments combined with conversations with companies that are seeking to develop some versions of these tools.

A Framework of GPT Interactions

A good starting point is to distinguish between who is involved in the interaction — individuals, groups of people, or another machine — and who starts the interaction — human or machine. This yields six different types of GenAI use. ChatGPT, where one human initiates interaction with the machine, is already well-known. We now describe each of the other GPTs and outline their potential.
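A minimal sketch of the six-cell framework as a lookup over its two dimensions; the Python names below are illustrative only, not part of any real API:

```python
from enum import Enum

class Participant(Enum):
    INDIVIDUAL = "individual"
    GROUP = "group"
    MACHINE = "machine"

class Initiator(Enum):
    HUMAN = "human"
    MACHINE = "machine"

# The six GPT interaction types, keyed by (who is involved, who starts
# the interaction). Names follow the framework described in the text.
FRAMEWORK = {
    (Participant.INDIVIDUAL, Initiator.HUMAN): "ChatGPT",
    (Participant.INDIVIDUAL, Initiator.MACHINE): "CoachGPT",
    (Participant.GROUP, Initiator.HUMAN): "GroupGPT",
    (Participant.GROUP, Initiator.MACHINE): "BossGPT",
    (Participant.MACHINE, Initiator.HUMAN): "AutoGPT",
    (Participant.MACHINE, Initiator.MACHINE): "ImperialGPT",
}

print(FRAMEWORK[(Participant.GROUP, Initiator.MACHINE)])  # BossGPT
```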

CoachGPT is a personal assistant that provides you with a set of suggestions on managing your daily life. It would base these suggestions not on explicit prompts from you, but on the basis of observing what you do and your environment. For example, it could observe you as an executive and note that you find it hard to build trust in your team; it could then recommend precise actions to overcome this blind spot. It could also come up with personalized advice on development options or even salary negotiations.

CoachGPT would subsequently track which recommendations you adopted, and which of them actually benefited you, and use that feedback to improve its advice. Over time, you would get a highly personalized AI advisor, coach, or consultant.

Organizations could adopt CoachGPT to advise customers on how to use a product: a construction company could offer it to advise end users on how best to use its equipment, and an accounting firm could proffer real-time advice on how best to account for a set of transactions.

To make CoachGPT effective, individuals and organizations would have to allow it to work in the background, monitoring online and offline activities. Clearly, serious privacy considerations need to be addressed before we entrust our innermost thoughts to the system. However, the potential for positive outcomes in both private and professional lives is immense.

GroupGPT would be a bona fide group member that can observe interactions between group members and contribute to the discussion. For example, it could conduct fact checking, supply a summary of the conversation, suggest what to discuss next, play the role of devil’s advocate, provide a competitor perspective, stress-test the ideas, or even propose a creative solution to the problem at hand.

The requests could come from individual group members or from the team’s boss, who need not participate in team interactions, but merely seeks to manage, motivate, and evaluate group members. The contribution could be delivered to the whole group or to specific individuals, with adjustments for that person’s role, skill, or personality.

The privacy concerns mentioned above also apply to GroupGPT, but, if addressed, organizations could take advantage of GroupGPT by using it for project management, especially on long and complicated projects involving relatively large teams across different departments or regions. Since GroupGPT would overcome human limitations on information storage and processing capacity, it would be ideal for supporting complex and dispersed teams.

BossGPT takes an active role in advising a group of people on what they could or should do, without being prompted. It could provide individual recommendations to group members, but its real value emerges when it begins to coordinate the work of group members, telling them as a group who should do what to maximize team output. BossGPT could also step in to offer individual coaching and further recommendations as the project and team dynamics evolve.

The algorithms necessary for BossGPT to work would be much more complicated as they would have to consider somewhat unpredictable individual and group reactions to instructions from a machine, but it could have a wide range of uses. For example: an executive changing job could request a copy of her reactions to her first organization’s BossGPT instructions, which could then be used to assess how she would fit into the new organization — and the new organization-specific BossGPT.

At the organizational level companies could deploy BossGPT to manage people, thereby augmenting — or potentially even replacing — existing managers. Similarly, BossGPT has tremendous applications in coordinating work across organizations and managing complex supply chains or multiple suppliers.

Companies could turn BossGPT into a product, offering their customers AI solutions to help them manage their business. These solutions could be natural extensions of the CoachGPT examples described earlier. For example, a company selling construction equipment could offer BossGPT to coordinate many end users on a construction site, and an accounting firm could provide it to coordinate the work of many employees of its customers to run the accounting function in the most efficient way.

AutoGPT entails a human giving a request or prompt to one machine, which in turn engages other machines to complete the task. In its simplest form, a human might instruct a machine to complete a task; the machine realizes it lacks the specific software needed to execute it, so it searches Google for the missing software, downloads and installs it, and then uses it to finish the request.

In a more complicated version, humans could give AutoGPT a goal (such as creating the best viral YouTube video) and instruct it to interact with another GenAI to iteratively come up with the best ChatGPT prompt to achieve the goal. The machine would then launch the process by proposing a prompt to another machine, then evaluate the outcome, and adjust the prompt to get closer and closer to the final goal.
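The iterative refinement loop described above might look like the following sketch; `generate` and `score` are hypothetical stand-ins for calls to two generative models, not a real API:

```python
def refine_prompt(goal, generate, score, max_rounds=10, target=0.9):
    """Iteratively improve a prompt until its output scores well against a goal."""
    prompt = f"Produce content that achieves this goal: {goal}"
    best_prompt, best_score = prompt, 0.0
    for _ in range(max_rounds):
        output = generate(prompt)       # one model produces a candidate output
        quality = score(goal, output)   # a second model evaluates it, 0.0-1.0
        if quality > best_score:
            best_prompt, best_score = prompt, quality
        if quality >= target:
            break
        # Ask the generator to rewrite its own prompt based on the feedback.
        prompt = generate(
            f"Rewrite this prompt so the output better achieves '{goal}'. "
            f"Previous prompt: {prompt}. Score: {quality:.2f}"
        )
    return best_prompt, best_score
```

In practice both callables would wrap requests to generative-model APIs; the loop structure, not the stubs, is the point.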

In the most complicated version, AutoGPT could draw on functionalities of the other GPTs described above. For example, a team leader could task a machine with maximizing both the effectiveness and job satisfaction of her team members. AutoGPT could then switch between coaching individuals through CoachGPT, providing them with suggestions for smoother team interactions through GroupGPT, while at the same time issuing specific instructions on what needs to be done through BossGPT. AutoGPT could subsequently collect feedback from each activity and adjust all the other activities to reach the given goal.

Unlike the versions above, which have yet to be created, a version of AutoGPT was rolled out in April 2023 and is quickly gaining broad acceptance. The technology is still imperfect and requires improvement, but it is already evident that AutoGPT can complete jobs that require finishing several tasks in sequence.

We see its biggest applications in complex tasks, such as supply chain coordination, but also in fields such as cybersecurity. For example, organizations could prompt AutoGPT to continually address any cybersecurity vulnerabilities, which would entail looking for them — which already happens — but then instead of simply flagging them, AutoGPT would search for solutions to the threats or write its own patches to counter them. A human might still be in the loop, but since the system is self-generative within these limits, we believe that AutoGPT’s response is likely to be faster and more efficient.

ImperialGPT is the most abstract GenAI — and perhaps the most transformational — in which two or more machines would interact with each other, direct each other, and ultimately direct humans to engage in a course of action. This type of GPT worries most AI analysts, who fear losing control of AI and AI “going rogue.” We concur with these concerns, particularly if — as now — there are no strict guardrails on what AI is allowed to do.

At the same time, if ImperialGPT is allowed to come up with ideas and share them with humans, but its ability to act on those ideas is restricted, we believe it could generate extremely interesting creative solutions, especially for “unknown unknowns,” where human knowledge and creativity fall short. Such systems could then envision and game out multiple black swan events and worst-case scenarios, complete with potential costs and outcomes, to provide possible solutions.

Given the potential dangers of ImperialGPT, and the need for tight regulation, we believe that ImperialGPT will be slow to take off, at least commercially. We do anticipate, however, that governments, intelligence services, and the military will be interested in deploying ImperialGPT under strictly controlled conditions.

Implications for Your Business

So, what does our framework mean for companies and organizations around the world? First and foremost, we encourage you to step back and see the recent advances in ChatGPT as merely the first application of new AI technologies. Second, we urge you to think about the various applications outlined here and use our framework to develop applications for your own company or organization. In the process, we are sure you will discover new types of GPTs that we have not mentioned. Third, we suggest you classify these different GPTs in terms of potential value to your business, and the cost of developing them.

We believe that applications that begin with a single human initiating or participating in the interaction (GroupGPT, CoachGPT) will probably be the easiest to build and should generate substantial business value, making them the perfect initial candidates. In contrast, applications with interactions involving multiple entities or those initiated by machines (AutoGPT, BossGPT, and ImperialGPT) may be harder to implement, with trickier ethical and legal implications.

You might also want to start thinking about the complex ethical, legal, and regulatory concerns that will arise with each GPT type. Failure to do so exposes you and your company to both legal liabilities and — perhaps more importantly — an unintended negative effect on humanity.

Our next set of recommendations depends on your company type. A tech company or startup, or one that has ample resources to invest in these technologies, should start working on developing one or more of the GPTs discussed above. This is clearly a high-risk, high-reward strategy.

In contrast, if your competitive strength is not in GenAI or if you lack resources, you might be better off adopting a “wait and see” approach. This means you will be slow to adopt the current technology, but you will not waste valuable resources on what may turn out to be only an interim version of a product. Instead, you can begin preparing your internal systems to better capture and store data as well as readying your organization to embrace these new GPTs, in terms of both work processes and culture.

The launch and rapid adoption of GenAIs is rightly being considered as the next level in the evolution of AI and a potentially epochal moment for humanity in general. Although GenAIs represent breakthroughs in solving fundamental engineering and computer science problems, they do not automatically guarantee value creation for all organizations. Rather, smart companies will need to invest in modifying and adapting the core technology before figuring out the best way to monetize the innovations. Firms that do this right may indeed strike it rich in the GenAI goldrush.

Should You Start a Generative AI Company? https://smallbiz.com/should-you-start-a-generative-ai-company/ Mon, 19 Jun 2023

I am thinking of starting a company that employs generative AI but I am not sure whether to do it. It seems so easy to get off the ground. But if it is so easy for me, won’t it be easy for others too? 

This year, more entrepreneurs have asked me this question than any other. Part of what is so exciting about generative AI is that the upsides seem limitless. For instance, if you have managed to create an AI model that has some kind of general language reasoning ability, you have a piece of intelligence that can potentially be adapted toward various new products that could also leverage this ability — like screen writing, marketing materials, teaching software, customer service, and more.

For example, the software company Luka built an AI companion called Replika that enables customers to have open-ended conversations with an “AI friend.” Because the technology was so powerful, managers at Luka began receiving inbound requests to provide a white label enterprise solution for businesses wishing to improve their chatbot customer service. In the end, Luka’s managers used the same underlying technology to spin off both an enterprise solution and a direct-to-consumer AI dating app (think Tinder, but for “dating” AI characters).

In deciding whether a generative AI company is for you, I recommend establishing answers to the following two big questions: 1) Will your company compete on foundational models, or on top-layer applications that leverage these foundational models? And 2) Where along the continuum between a highly scripted solution and a highly generative solution will your company be located? Depending on your answers to these two questions, there will be long-lasting implications for your ability to defend yourself against the competition.

Foundational Models or Apps?

Tech giants are now renting out their most generalizable proprietary models — i.e., “foundational models” — and companies like EleutherAI and Stability AI are providing open-source versions of these foundational models at a fraction of the cost. Foundational models are becoming commoditized, and only a few startups can afford to compete in this space.

You may think that foundational models are the most attractive, because they will be widely used and their many applications will provide lucrative opportunities for growth. What is more, we are living in exciting times where some of the most sophisticated AI is already available “off the shelf” to get started with.

Entrepreneurs who want to base their company on foundational models are in for a challenge, though. As in any commoditized market, the companies that will survive are those that offer unbundled offerings for cheap or that deliver increasingly enhanced capabilities. For example, speech-to-text APIs like Deepgram and AssemblyAI compete not only with each other but with the likes of Amazon and Google, in part by offering cheaper, unbundled solutions. Even so, these firms are in a fierce war over price, speed, model accuracy, and other features. In contrast, tech giants like Amazon, Meta, and Google make significant R&D investments that enable them to relentlessly deliver cutting-edge advances in image, language, and (increasingly) audio and video reasoning. For instance, it is estimated that OpenAI spent anywhere between $2 million and $12 million to computationally train ChatGPT — and this is just one of several APIs that it offers, with more on the way.

Instead of competing on increasingly commoditized foundational models, most startups should differentiate themselves by offering “top layer” software applications that leverage other companies’ foundational models. They can do this by fine-tuning foundational models on their own high quality, proprietary datasets that are unique to their customer solution, to provide high value to customers.

For instance, the marketing content creator, Jasper AI, grew to unicorn status largely by leveraging foundational models from OpenAI. To this day, the firm uses OpenAI to help customers generate content for blogs, social media posts, website copy and more. At the same time, the app is tailored for their marketer and copywriter customers, providing specialized marketing content. The company also provides other specialized tools, like an editor that multiple team members can work on in tandem. Now that the company has gained traction, going forward it can afford to spend more of its resources on reducing its dependency on the foundational models that enabled it to grow in the first place.

Since the top-layer apps are where these companies find their competitive advantage, they lie in a delicate balance between protecting the privacy of their datasets from large tech players even as they rely on these players for foundational models. Given this, some startups may be tempted to build their own in-house foundational models. Yet, this is unlikely to be a good use of precious startup funds, given the challenges noted above. Most startups are better off leveraging foundational models to grow fast, instead of reinventing the wheel.

From Scripted to Generative

Your company will need to live somewhere along a continuum from a purely scripted solution to a purely generative one. Scripted solutions involve selecting an appropriate response from a dataset of predefined, scripted responses, whereas generative ones involve generating new, unique responses from scratch.
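The contrast can be made concrete with a toy sketch; `SCRIPTED_RESPONSES`, the keyword matching, and the `call_llm` placeholder are all illustrative, not a real product's API:

```python
# A dataset of predefined responses: the "scripted" end of the continuum.
SCRIPTED_RESPONSES = {
    "refund": "To request a refund, visit your order page and click 'Refund'.",
    "hours": "We are open 9am-5pm, Monday through Friday.",
}

def scripted_reply(message: str) -> str:
    # Select a predefined response by keyword: safe and constrained,
    # but the script runs out quickly.
    for keyword, response in SCRIPTED_RESPONSES.items():
        if keyword in message.lower():
            return response
    return "Sorry, I can't help with that. A human agent will follow up."

def generative_reply(message: str, call_llm) -> str:
    # Compose a novel response from scratch: flexible and human-like,
    # but it needs guardrails. call_llm is a placeholder for a model call.
    return call_llm(f"Reply helpfully to this customer message: {message}")
```

The scripted path can never say anything outside its table; the generative path can say anything at all, which is both its appeal and its risk.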

Scripted solutions are safer and constrained, but also less creative and human-like, whereas generative solutions are riskier and unconstrained, but also more creative and human-like. More scripted approaches are necessary for certain use-cases and industries, like medical and educational applications, where there need to be clear guardrails on what the app can do. Yet, when the script reaches its limit, users may lose their engagement and customer retention may suffer. Moreover, it is more challenging to grow a scripted solution because you constrain yourself right from the start, limiting your options down the road.

On the other hand, more generative solutions carry their own challenges. Because AI-based offerings include intelligence, there are more degrees of freedom in how consumers can interact with them, increasing the risks. For example, one married father tragically died by suicide following a conversation with an AI chatbot app, Chai, that encouraged him to sacrifice himself to save the planet. The app leveraged a foundational language model (a bespoke version of GPT-J) from EleutherAI. The founders of Chai have since modified the app so that mentions of suicidal ideation are served with helpful text. Interestingly, one of the founders of Chai, Thomas Rianlan, took the blame, saying: “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts.”

It is challenging for managers to anticipate all the ways in which things can go wrong with a highly generative app, given the “black box” nature of the underlying AI. Doing so involves anticipating risky scenarios that may be exceedingly rare. One way of anticipating such cases is to pay human annotators to screen content for potentially harmful categories, such as sex, hate speech, violence, self-harm, and harassment, and then use these labels to train models that automatically flag such content. Yet it is still difficult to come up with an exhaustive taxonomy. Thus, managers who deploy highly generative solutions must be prepared to proactively anticipate the risks, which can be both difficult and expensive. The same applies if you later decide to offer your solution as a service to other companies.
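A sketch of such a moderation pass; the category list mirrors the taxonomy above, and `flag` stands in for a classifier trained on human-annotated labels:

```python
# Harm categories drawn from the taxonomy discussed in the text.
HARM_CATEGORIES = ["sexual", "hate", "violence", "self-harm", "harassment"]

def moderate(text: str, flag) -> dict:
    """Run a per-category classifier over generated text.

    `flag(text, category)` is a placeholder for a trained model that
    returns True if the text falls into that harmful category. The
    response is blocked if any category fires.
    """
    results = {category: flag(text, category) for category in HARM_CATEGORIES}
    results["blocked"] = any(results[c] for c in HARM_CATEGORIES)
    return results
```

The hard part, as the text notes, is not this plumbing but making the taxonomy and the classifiers behind `flag` exhaustive enough.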

Because a fully generative solution is closer to natural, human-like intelligence, it is more attractive from the standpoint of retention and growth: it is more engaging and can be applied to more new use cases.

• • •

Many entrepreneurs are considering starting companies that leverage the latest generative AI technology, but they must ask themselves whether they have what it takes to compete on increasingly commoditized foundational models, or whether they should instead differentiate on an app that leverages these models.

They must also consider what type of app they want to offer on the continuum from a highly scripted to a highly generative solution, given the different pros and cons accompanying each. Offering a more scripted solution may be safer but limit their retention and growth options, whereas offering a more generative solution is fraught with risk but is more engaging and flexible.

I hope that entrepreneurs will ask these questions before diving into their first generative AI venture, so that they can make informed decisions about what kind of company they want to be, scale fast, and maintain long-term defensibility.

When to Give Employees Access to Data and Analytics https://smallbiz.com/when-to-give-employees-access-to-data-and-analytics/ Wed, 24 May 2023

As business leaders strive to get the most out of their analytics investments, democratized data science often appears to offer the perfect solution. Using analytics software with no-code and low-code tools can put data science techniques into virtually anyone’s hands. In the best scenarios, this leads to better decision making and greater self-reliance and self-service in data analysis — particularly as demand for data scientists far outstrips their supply. Add to that reduced talent costs (with fewer high-cost data scientists) and more scalable customization to tailor analysis to a particular business need and context.

However, amid all the discussion around whether and how to democratize data science and analytics, a crucial point has been overlooked. The conversation needs to define when to democratize data and analytics, even to the point of redefining what democratization should mean.

Fully democratized data science and analytics presents many risks. As Reid Blackman and Tamara Sipes wrote in a recent article, data science is difficult and an untrained “expert” cannot necessarily solve hard problems, even with good software. The ease of clicking a button that produces results provides no assurance that the answer is good — in fact, it could be very flawed and only a trained data scientist would know.

It’s Only a Matter of Time

Even with these reservations, however, democratization of data science is here to stay, as evidenced by the proliferation of software and analytics tools. Thomas Redman and Thomas Davenport are among those who advocate for the development of “citizen data scientists,” even screening for basic data science skills and aptitudes in every position hired.

Democratization of data science, however, should not be taken to the extreme. Analytics need not be at everyone’s fingertips for an organization to flourish. How many outrageously talented people wouldn’t be hired simply because they lack “basic data science skills”? It’s unrealistic and overly limiting.

As business leaders look to democratize data and analysis within their organizations, the real question they should be asking is “when” it makes the most sense. This starts by acknowledging that not every “citizen” in an organization is comparably skilled to be a citizen data scientist. As Nick Elprin, CEO and co-founder of Domino Data Labs, which provides data science and machine learning tools to organizations, told me in a recent conversation, “As soon as you get into modeling, more complicated statistical issues are often lurking under the surface.”

The Challenge of Data Democratization

Consider a grocery chain that recently used advanced predictive methods to right-size its demand planning, in an attempt to avoid having too much inventory (resulting in spoilage) or too little (resulting in lost sales). The losses due to spoilage and stockouts were not enormous, but the problem of curtailing them was very hard to solve, given all the variables of demand, seasonality, and consumer behaviors. The complexity of the problem meant that the grocery chain could not leave it to citizen data scientists to figure out; instead, it had to leverage a team of bona fide, well-trained data scientists.

Data citizenry requires a “representative democracy,” as Elprin and I discussed. Just as U.S. citizens elect politicians to represent them in Congress (presumably to act in their best interests in legislative matters), so too organizations need the right representation by data scientists and analysts to weigh in on issues that others simply don’t have the expertise to address.

In short, it’s knowing when and to what degree to democratize data. I suggest the following five criteria:

Think about the “citizen’s” skill level: The citizen data scientist, in some shape and form, is here to stay. As stated earlier, there simply aren’t enough data scientists to go around, and using this scarce talent to address every data issue isn’t sustainable. More to the point, democratization of data is key to inculcating analytical thinking across the organization. A well-recognized example is Coca-Cola, which has rolled out a digital academy to train managers and team leaders, producing graduates of the program who are credited with about 20 digital, automation, and analytics initiatives at several sites in the company’s manufacturing operations.

However, when it comes to engaging in predictive modeling and advanced data analysis that could fundamentally change a company’s operations, it’s crucial to consider the skill level of the “citizen.” A sophisticated tool in the hands of a data scientist is additive and valuable; the same tool in the hands of someone who is merely “playing around in data” can lead to errors, incorrect assumptions, questionable results, and misinterpretation of outcomes and conclusions.

Measure the importance of the problem: The more important a problem is to the company, the more imperative it is to have an expert handling the data analysis. For example, generating a simple graphic of historical purchasing trends can probably be accomplished by someone with a dashboard that displays data in a visually appealing form. But a strategic decision that has meaningful impact on a company’s operations requires expertise and reliable accuracy. For example, how much an insurance company should charge for a policy is so deeply foundational to the business model itself that it would be unwise to relegate this task to a non-expert.

Determine the problem’s complexity: Solving complex problems is beyond the capacity of the typical citizen data scientist. Consider the difference between comparing customer satisfaction scores across customer segments (simple, well-defined metrics and lower-risk) versus using deep learning to detect cancer in a patient (complex and high-risk). Such complexity cannot be left to a non-expert making cavalier decisions — and potentially the wrong decisions. When complexity and stakes are low, democratizing data makes sense.

An example is a Fortune 500 company I work with, which runs on data throughout its operations. A few years ago, I ran a training program in which more than 4,500 managers were divided into small teams, each of which was asked to articulate an important business problem that could be solved with analytics. Teams were empowered to solve simple problems with available software tools, but most problems surfaced precisely because they were difficult to solve. Importantly, these managers were not charged with actually solving those difficult problems, but rather with collaborating with the data science team. Notably, these 1,000 teams identified no fewer than 1,000 business opportunities where analytics could help the organization.

Empower those with domain expertise: If a company is seeking some “directional” insights — customer X is more likely to buy a product than customer Y — then democratization of data and some lower-level citizen data science will probably suffice. In fact, tackling these types of lower-level analyses can be a great way to empower those with domain expertise (i.e., being closest to the customers) with some simplified data tools. Greater precision (such as with high-stakes and complex issues) requires expertise.

The most compelling case for precision is when there are high-stakes decisions to be made based on some threshold. If an aggressive cancer treatment plan with significant side effects were to be undertaken at, for instance, greater than 30% likelihood of cancer, it would be important to differentiate between 29.9% and 30.1%. Precision matters — especially in medicine, clinical operations, technical operations, and for financial institutions that navigate markets and risk, often to capture very small margins at scale.
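The threshold logic itself is trivial to express, which is exactly why the precision of the estimate matters so much; the 30% cutoff below follows the article's hypothetical:

```python
# Hypothetical treatment threshold from the example in the text.
TREATMENT_THRESHOLD = 0.30

def recommend_treatment(cancer_probability: float) -> bool:
    """A high-stakes decision hinges entirely on which side of the cutoff
    the estimate falls; a 0.2-point error in the model flips the answer."""
    return cancer_probability > TREATMENT_THRESHOLD

print(recommend_treatment(0.299))  # False
print(recommend_treatment(0.301))  # True
```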

Challenge experts to scout for bias: Advanced analytics and AI can easily lead to decisions that are considered “biased.” This is challenging in part because the point of analytics is to discriminate — that is, to base choices and decisions on certain variables. (Send this offer to this older male, but not to this younger female because we think they will exhibit different purchasing behaviors in response.) The big question, therefore, is when such discrimination is actually acceptable and even good — and when it is inherently problematic, unfair, and dangerous to a company’s reputation.

Consider the example of Goldman Sachs, which was accused of discriminating by offering less credit on an Apple credit card to women than to men. In response, Goldman Sachs said it did not use gender in its model, only factors such as credit history and income. However, one could argue that credit history and income are correlated to gender and using those variables punishes women who tend to make less money on average and historically have had less opportunity to build credit. When using output that discriminates, decision-makers and data professionals alike need to understand how the data were generated and the interconnectedness of the data, as well as how to measure such things as differential treatment and much more. A company should never put its reputation on the line by having a citizen data scientist alone determine whether a model is biased.
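One simple check a data professional might run is the correlation between an excluded protected attribute and the features that remain in the model; the data below is fabricated purely for illustration:

```python
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Fabricated example: 1 = female, 0 = male; income in thousands.
gender = [1, 1, 1, 0, 0, 0]
income = [42, 45, 47, 60, 63, 66]

# A strongly negative r means income acts as a proxy for gender,
# so dropping gender from the model does not remove the bias.
print(pearson_r(gender, income))
```

Real bias audits go far beyond a single correlation (differential treatment, disparate impact, and more), which is precisely why the text argues this should not be left to a citizen data scientist alone.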

Democratizing data has its merits, but it comes with challenges. Giving everyone the keys doesn’t make them experts, and drawing the wrong insights can be catastrophic. New software tools can let everyone use data, but don’t mistake that widespread access for genuine expertise.

Infusing Digital Responsibility into Your Organization https://smallbiz.com/infusing-digital-responsibility-into-your-organization/ Fri, 28 Apr 2023

In 2018, Rick Smith, founder and CEO of Axon, the Scottsdale, Arizona-based manufacturer of Taser weapons and body cameras, became concerned that advances in technology were creating new and challenging ethical issues. So, he set up an independent AI ethics board made up of ethicists, AI experts, public policy specialists, and representatives of law enforcement to provide recommendations to Axon’s management. In 2019, the board recommended against adding facial recognition technology to the company’s line of body cameras, and in 2020, it provided guidelines regarding the use of automated license plate recognition technology. Axon’s management followed both recommendations.

In 2022, the board recommended against a management proposal to produce a drone-mounted Taser designed to address mass shootings. After initially accepting the board’s recommendation, the company changed its mind and, in June 2022, in the wake of the Uvalde school shooting, announced it was launching the Taser drone program anyway. The board’s response was dramatic: Nine of the 13 members resigned and released a letter outlining their concerns. In response, the company announced a freeze on the project.

As societal expectations grow for the responsible use of digital technologies, firms that promote better practices will have a distinct advantage. According to a 2022 study, 58% of consumers, 60% of employees, and 64% of investors make key decisions based on their beliefs and values. Strengthening your organization’s digital responsibility can drive value creation, and brands regarded as more responsible will enjoy higher levels of stakeholder trust and loyalty. These businesses will sell more products and services, find it easier to recruit staff, and enjoy fruitful relationships with shareholders.

However, many organizations struggle to balance legitimate but competing stakeholder interests. Key tensions arise between business objectives and responsible digital practices. For example, data localization requirements often conflict with the efficiency ambitions of globally distributed value chains. The ethical checks and balances that need to be introduced during AI and algorithm development slow down development, which can be a problem when time-to-market is of utmost importance. Better data and analytics may enhance service personalization, but at the cost of customer privacy. And risks related to transparency and discrimination may dissuade organizations from using algorithms that could help drive cost reductions.

If managed effectively, digital responsibility can protect organizations from threats and open them up to new opportunities. Drawing on our ongoing research into digital transformations and in-depth studies of 12 large European firms across the consumer goods, financial services, information and communication technology, and pharmaceutical sectors that are active in digital responsibility, we derived four best practices to maximize business value and minimize resistance.

1. Anchor digital responsibility within your organizational values.

Digital responsibility commitments can be formulated into a charter that outlines the key principles and benchmarks your organization will adhere to. Start with a basic question: How do you define your digital responsibility objectives? The answer can often be found in your organization’s values, which are articulated in your mission statement or CSR commitments.

According to Jakob Woessner, manager of organizational development and digital transformation at cosmetics and personal care company Weleda, “our values framed what we wanted to do in the digital world, where we set our own limits, where we would go or not go.” The company’s core values are fair treatment, sustainability, integrity, and diversity. So when it came to establishing a robotics process automation program, Weleda executives were careful to ensure that it wasn’t associated with job losses, which would have violated the core value of fair treatment.

2. Extend digital responsibility beyond compliance.

While corporate values provide a useful anchor point for digital responsibility principles, relevant regulations on data privacy, IP rights, and AI cannot be overlooked. Forward-thinking organizations are taking steps to go beyond compliance and improve their behavior in areas such as cybersecurity, data protection, and privacy.

For example, UBS Banking Group’s efforts on data protection were kickstarted by GDPR compliance but have since evolved to focus more broadly on data-management practices, AI ethics, and climate-related financial disclosures. “It’s like puzzle blocks. We started with GDPR and then you just start building upon these blocks and the level moves up constantly,” said Christophe Tummers, head of service line data at the bank.

The key, we have found, is to establish a clear link between digital responsibility and value creation. One way this can be achieved is by complementing compliance efforts with a forward-looking risk-management mindset, especially in areas lacking technical implementation standards or where the law is not yet enforced. For example, Deutsche Telekom (DT) developed its own risk classification system for AI-related projects. The use of AI can expose organizations to risks associated with biased data, unsuitable modeling techniques, or inaccurate decision-making. Understanding the risks and building practices to reduce them are important steps in digital responsibility. DT includes these risks in scorecards used to evaluate technology projects.
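A risk classification system of the kind DT built can begin as a simple weighted scorecard. The sketch below is purely illustrative: the risk factors, weights, and thresholds are invented for the example and are not DT's actual system.

```python
# Illustrative AI-project risk scorecard; factors, weights, and cutoffs are invented.
WEIGHTS = {
    "biased_data": 3,         # training data may under-represent groups
    "opaque_model": 2,        # decisions are hard to explain
    "automated_decision": 3,  # no human in the loop
    "personal_data": 2,       # processes personally identifiable information
}

def risk_score(flags):
    """flags: set of risk factors present in the project."""
    return sum(w for factor, w in WEIGHTS.items() if factor in flags)

def risk_class(flags):
    score = risk_score(flags)
    if score >= 6:
        return "high"    # e.g., requires ethics-board review
    if score >= 3:
        return "medium"  # e.g., requires documented mitigations
    return "low"

cls = risk_class({"biased_data", "automated_decision"})  # score 6 -> "high"
```

Even a crude scorecard like this gives project teams a shared vocabulary for risk and a trigger for escalation, which is the point of embedding such scores in project evaluations.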

Making digital responsibility a shared outcome also helps organizations move beyond compliance. Swiss insurance company Die Mobiliar built an interdisciplinary team consisting of representatives from compliance, business security, data science, and IT architecture. “We structured our efforts around a common vision where business strategy and personal data work together on proactive value creation,” explains Matthias Brändle, product owner of data science and AI.

3. Set up clear governance.

Getting digital responsibility governance right is not easy. Axon had the right idea when it set up an independent AI ethics board. However, the governance was not properly thought through, so when the company disagreed with the board’s recommendation, it fell into a governance grey area marked by competing interests between the board and management.

Setting up a clear governance structure can minimize such tensions. There is an ongoing debate about whether to create a distinct team for digital responsibility or to weave responsibility throughout the organization.

Pharmaceutical company Merck took the first approach, setting up a digital ethics board to provide guidance on complex matters related to data usage, algorithms, and new digital innovations. It decided to act due to an increasing focus on AI-based approaches in drug discovery and big data applications in human resources and cancer research. The board provides recommendations for action, and any decision going against the board’s recommendation needs to be formally justified and documented.

Global insurance company Swiss Re adopted the second approach, based on the belief that digital responsibility should be part of all of the organization’s activities. “Whenever there is a digital angle, the initiative owner who normally resides in the business is responsible. The business initiative owners are supported by experts in central teams, but the business lines are accountable for its implementation,” explained Lutz Wilhelmy, Swiss Re risk and regulation advisor.

Another option we’ve seen is a hybrid model: a small team of internal and external experts who guide and support managers within the business lines to operationalize digital responsibility. The benefits of this approach include raised awareness and distributed accountability throughout the organization.

4. Ensure employees understand digital responsibility.

Today’s employees need not only to appreciate the opportunities and risks of working with different types of technology and data but also to be able to raise the right questions and have constructive discussions with colleagues.

Educating the workforce on digital responsibility was one of the key priorities of the Otto Group, a German e-commerce enterprise. “Lifelong learning is becoming a success factor for each and every individual, but also for the future viability of the company,” explained Petra Scharner-Wolff, member of the executive board for finance, controlling, and human resources. To kickstart its efforts, Otto developed an organization-wide digital education initiative leveraging a central platform that included scores of videos on topics related to digital ethics, responsible data practices, and how to resolve conflicts.

Learning about digital responsibility presents both a short-term challenge of upskilling the workforce and a longer-term challenge of creating a self-directed learning culture that adapts to the evolving nature of technology. Because issues of digital responsibility rarely happen in a vacuum, we recommend embedding aspects of digital responsibility into ongoing ESG skilling programs that also promote ethical behavior toward a broader set of stakeholders. This type of contextual learning can help employees navigate the complex facets of digital responsibility in a more applied and meaningful way.

Your organization’s needs and resources will determine whether you upskill your entire workforce or rely on a few specialists. A balance of both can be ideal, providing a strong foundation of digital ethics knowledge across the organization while keeping experts on hand for specialized guidance when needed.

Digital responsibility is fast becoming an imperative for today’s organizations. Success is by no means guaranteed. Yet by taking a proactive approach, forward-looking organizations can build and maintain responsible practices around their use of digital technologies, practices that not only improve digital performance but also advance broader organizational objectives.

]]>
Generative AI Will Change Your Business. Here’s How to Adapt. https://smallbiz.com/generative-ai-will-change-your-business-heres-how-to-adapt/ Wed, 12 Apr 2023 12:25:47 +0000 https://smallbiz.com/?p=99936

It’s coming. Generative AI will change the nature of how we interact with all software, and given how many brands have significant software components in how they interact with customers, generative AI will drive and distinguish how more brands compete.

In our last HBR piece, “Customer Experience in the Age of AI,” we discussed how the use of one’s customer information is already differentiating branded experiences. Now with generative AI, personalization will go even further, tailoring all aspects of digital interaction to how the customer wants it to flow, not how product designers envision cramming in more menus and features. And as the software follows the customer, it will go to places beyond the tight boundaries of a brand’s product. It will need to offer solutions to the things the customer wants to do: solve the full package of what someone needs and help them through their full journey, even if that means linking to outside partners, rethinking the definition of one’s offerings, and developing the underlying data and tech architecture to connect everything involved in the solution.

Generative AI can “generate” text, speech, images, music, video, and especially code. When that capability is joined with a feed of someone’s own information, used to tailor the when, what, and how of an interaction, then the ease by which someone can get things done, and the broadening accessibility of software, goes up dramatically. The simple input question box that stands at the center of Google and now, of most generative AI systems, such as in ChatGPT and DALL-E 2, will power more systems. Say goodbye to drop-down menus in software, and the inherently guided restrictions they place on how you use them. Instead, you’ll just see: “What do you want to do today?” And when you tell it what you want to do, it will likely offer some suggestions, drawing upon its knowledge of what you did last time, what triggers the system knows about your current context, and what you’ve already stored in the system as your core goals, such as “save for a trip,” “remodel our kitchen,” “manage meal plans for my family of five with special dietary needs,” etc.

Without the boundaries of a conventional software interface, consumers will just want to get done what they need, not caring whether the brand behind the software has limitations. The change in how we interact, and what we expect, will be dramatic, and dramatically more democratizing.

So much of the hype around generative AI has focused on its ability to generate text, images, and sounds, but it can also create code to automate actions and to facilitate pulling in external and internal data. By generating code in response to a command, it creates a shortcut that takes the user from a command straight to a completed action. No more working through all of the menus in the software. Even queries and analyses of the data stored in an application will be done simply by asking: “Who are the contacts I have not called in the last 90 days?” or “When is the next time I am scheduled to be in NYC with an opening for dinner?” To answer these questions today, we have to go into an application and gather data (possibly manually) from outside the application itself. Soon, the query can be recognized, code created, possibilities ranked, and the best answer generated. In milliseconds.
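The code such a system might emit for the contacts question above is itself quite simple. Here is a hypothetical sketch, with invented contact records standing in for a real CRM, of what "generate code, then run it" looks like for that query:

```python
from datetime import date, timedelta

# Hypothetical contact records; in practice these would come from the CRM.
contacts = [
    {"name": "Alice", "last_called": date(2023, 1, 5)},
    {"name": "Bob",   "last_called": date(2023, 6, 1)},
]

def not_called_since(contacts, days, today=None):
    """Return names of contacts whose last call was more than `days` days ago."""
    today = today or date.today()
    cutoff = today - timedelta(days=days)
    return [c["name"] for c in contacts if c["last_called"] < cutoff]

# "Who are the contacts I have not called in the last 90 days?"
stale = not_called_since(contacts, 90, today=date(2023, 6, 20))
# Alice (last called Jan 5) qualifies; Bob (June 1) does not.
```

The point is not that this code is hard to write; it is that the user never sees it. The question goes in, the generated code runs against the application's data, and only the answer comes back.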

This drastically simplifies how we interact with what we think of as today’s applications. It also enables more brands to build applications as part of their value proposition. “Given the weather, traffic, and who I am with, give me a tourist itinerary for the afternoon, with an ongoing guide, and the ability to just buy any tickets in advance to skip any lines.” “Here’s my budget, here’s five pictures of my current bathroom, here’s what I want from it, now give me a renovation design, a complete plan for doing it, and the ability to put it out for bid.” Who will create these capabilities? Powerful tech companies? Brands who already have relationships in their relevant categories? New, focused disruptors? The game is just starting, but the needed capabilities and business philosophies are already taking shape.

A Broader Journey with Broader Boundaries

In a world where generative AI, and all of the other evolving AI systems proliferate, building one’s own offering requires focusing on the broadest possible view of one’s pool of data, of the journeys you can enable, and the risks they raise:

Bring data together.

Solving for a customer’s complete need will require pulling from information across your company, and likely beyond your boundaries. One of the biggest challenges for most applications, and actually for most IT departments, is bringing data together from disparate systems. Many AI systems can write the code needed to understand the schemas of two different databases, and integrate them into one repository, which can save several steps in standardizing data schema. AI teams still need to dedicate time for data cleansing and data governance (arguably even more so), for example, aligning on the right definitions of key data features. However, with AI capabilities in hand, the next steps in the process to bring all the data together become easier.
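As a toy illustration of the schema-mapping step described above, the glue code a GenAI assistant might propose often amounts to renaming fields into a common schema and merging on a shared key. The field names and mappings below are hypothetical:

```python
# Two systems describe the same customers under different, hypothetical schemas.
crm_rows = [{"cust_id": 1, "full_name": "Ada Lovelace"}]
billing_rows = [{"customerId": 1, "name": "Ada Lovelace", "balance": 120.0}]

# Mappings like these are what an AI assistant can infer from the two schemas.
CRM_MAP = {"cust_id": "id", "full_name": "name"}
BILLING_MAP = {"customerId": "id", "name": "name", "balance": "balance"}

def normalize(rows, mapping):
    """Rename each row's fields into the shared schema, dropping unmapped fields."""
    return [{mapping[k]: v for k, v in row.items() if k in mapping} for row in rows]

def merge_on_id(*sources):
    """Combine normalized rows from all sources into one record per id."""
    merged = {}
    for rows in sources:
        for row in rows:
            merged.setdefault(row["id"], {}).update(row)
    return list(merged.values())

unified = merge_on_id(normalize(crm_rows, CRM_MAP),
                      normalize(billing_rows, BILLING_MAP))
```

Generating this mapping is the easy part the AI can take over; deciding that `cust_id` and `customerId` really mean the same thing, and cleaning conflicting values, is the data-governance work that still needs human attention.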

Narrative AI, for example, offers a marketplace for buying and selling data, along with data collaboration software that allows companies to import data from anywhere into their own repositories, aligned to their schema, with merely a click. Data from across a company, from partners, or from sellers of data, can be integrated and then used for modeling in a flash.

Combining one’s own proprietary data with public data, data from other available AI tools, and from many external parties can serve to dramatically improve the AI’s ability to understand one’s context, predict what is being asked, and have a broader pool from which to execute a command.

The old rule around “garbage in, garbage out” still applies, however. Especially when it comes to integrating third-party data, it is important to cross-check the accuracy with internal data before integrating it into the underlying data set. For example, one fashion brand recently found that gender data purchased from a third-party source didn’t match its internal data 50% of the time, so the source and reliability really matters.

The “rules layer” becomes even more critical.

Without obvious restrictions on what a customer can ask for in an input box, the AI needs to have guidelines that ensure it responds appropriately to things beyond its means or that are inappropriate. This amplifies the need for a sharp focus on the rules layer, where the experience designers, marketers and business decision makers set the target parameters for the AI to optimize.

For example, for an airline brand that leveraged AI to decide on the “next best conversation” to engage in with customers, we set rules around what products could be marketed to which customers, what copy could be used in which jurisdictions, and rules around anti-repetition to ensure customers didn’t get bombarded with irrelevant messages.

These constraints become even more critical in the era of generative AI. As pioneers of these solutions are finding, customers will be quick to point out when the machine “breaks” and produces nonsensical answers. The best approaches will therefore start small and be tailored to specific solutions where the rules can be tightly defined and human decision-makers can design rules for edge cases.
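A rules layer of the kind described in the airline example can start as a plain allow/deny filter wrapped around the model's candidate messages. The rule set, field names, and customer attributes below are hypothetical, chosen to mirror the three rule types mentioned above:

```python
# Hypothetical rules layer: screen model-generated offers before they are sent.
RULES = [
    ("eligible_product",   # only market products this customer may buy
     lambda msg, cust: msg["product"] in cust["eligible_products"]),
    ("jurisdiction_ok",    # only use copy approved for the customer's jurisdiction
     lambda msg, cust: cust["jurisdiction"] in msg["approved_jurisdictions"]),
    ("anti_repetition",    # don't bombard the customer with the same offer
     lambda msg, cust: msg["product"] not in cust["recently_messaged"]),
]

def passes_rules(msg, cust):
    return all(pred(msg, cust) for _, pred in RULES)

customer = {
    "eligible_products": {"lounge_pass", "upgrade"},
    "jurisdiction": "DE",
    "recently_messaged": {"upgrade"},
}
offer = {"product": "lounge_pass", "approved_jurisdictions": {"DE", "FR"}}
allowed = passes_rules(offer, customer)  # True: eligible, approved in DE, not repeated
```

The generative model proposes; the rules layer disposes. Keeping the rules as explicit, human-authored predicates is what lets business decision-makers, not the model, own the boundaries of the experience.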

Deliver the end to end journey, and the specific use cases involved.

Customers will just ask for what they need and will seek the simplest and/or most cost-effective way to get it done. What is the true end goal of the customer? How far can you get? With the ability to move information more easily across parties, you can build partnerships for data and for executing the actions that help a customer through their journey. Your ecosystem of business relationships will therefore differentiate your brand.

In his impressive demo of how Hubspot is incorporating generative AI into “ChatSpot,” Dharmesh Shah, CTO and founder of Hubspot, lays out how they are mingling the capabilities of HubSpot with OpenAI, and with other tools. Not only does he show Hubspot’s interface reduced to just a single text entry prompt, but he also shows new capabilities that extend well beyond Hubspot’s current borders. A salesperson seeking to send an email to a business leader at a target company can use ChatSpot to perform research on the company, on the target business leader, and then draft an email that incorporates both information from the research and from what it knows about the salesperson themselves. The resulting email draft can then be edited, sent, and tracked by HubSpot’s system, and the target business leader automatically entered into a contact database with all associated information.

The power of connected information, automatic code creation, and generated output is leading many other companies to extend their borders, not as conventional “vertical,” or “horizontal” expansion, but as “journey expansion.” When you can offer “services” based on a simple user command, those commands will reflect the customer’s true goal and the total solution they seek, not just a small component that you may have been dealing with before.

Differentiate via your ecosystem.

Solving for those broader needs will inevitably pull you into new kinds of partner relationships. As you build out your end-to-end journey capabilities, how you construct those business relationships will become a critical new basis for strategy. How trustworthy, how well permissioned, how timely, how comprehensive, how biased is their data? How will they use the data your brand sends out? What is the basis of your relationship, quality control, and data integration? Pre-negotiated privileged partnerships? A simple vendor relationship? How are you charging for the broader service, and how will the parties involved get their cut?

Similar to how search brands like Google, ecommerce marketplaces like Amazon, and recommendation engines such as Trip Advisor become gateways for sellers, more brands can become front-end navigators for a customer journey if they can offer quality partners, experience personalization, and simplicity. CVS could become a full health network coordinator that health providers, health tech, wellness services, pharma, and other support services will plug into. When its app can let you simply ask: “How can you help me lose 30 pounds,” or “How can you help me deal with my increasing arthritis,” the end-to-end program they can generate and then completely manage, through prompts to you and information passed around their network, will be a critical differentiator in how they, as a brand, build loyalty, capture your data, and use that to keep increasing service quality.

Prioritize safety, fairness, privacy, security, and transparency.

The way you manage data becomes part of your brand, and the outcomes for your customers will have edge cases and bias risks that you should seek out and mitigate. We are all reading stories of how people are pushing generative AI systems, such as ChatGPT, to extremes and getting back what the applications’ developers call “hallucinations,” or bizarre responses. We are also seeing responses that come back as solid assertions of wrong facts, or responses derived from biased data that can lead to dangerous outcomes for some populations. Companies are also getting “outed” for sharing private customer information with other parties, without customer permission and clearly not for the benefit of their customers.

The risks — from the core data, to the management of data, to the nature of the output of the generative AI — will simply keep multiplying. Some companies, such as American Express, have created new positions for chief customer protection officers, whose role is to stay ahead of potential risk scenarios, but more importantly, to build safeguards into how product managers are developing and managing the systems. Risk committees on corporate boards are already bringing in new experts and expanding their purviews, but more action has to happen pre-emptively. Testing data pools for bias, understanding where data came from and its copyright/accuracy/privacy risks, managing explicit customer permissions, limiting where information can go, and constantly testing the application for edge cases where customers could push it to extremes, are all critical processes to build into one’s core product management discipline, and into the questions that top management routinely has to ask. Boards will expect to see dashboards on these kinds of activities, and other external watchdogs, including lawyers representing legal challenges, will demand them as well.

Is it worth it? The risks will constantly multiply, and the costs of creating structures to manage those risks will be real. We’ve only begun to figure out how to manage bias, accuracy, copyright, privacy, and manipulated ranking risks at scale. The opacity of the systems often makes it impossible to explain how an outcome happened if some kind of audit is necessary.

But nonetheless, the capabilities of generative AI are not only available, they are the fastest-growing class of applications ever. Accuracy will improve as the pool of tapped data increases, and as parallel AI systems as well as “humans in the loop” work to find and remedy those nasty “hallucinations.”

The potential for simplicity, personalization, and democratization of access to new and existing applications will not only pull in hundreds of start-ups but also tempt many established brands into creating new AI-forward offerings. If they can do more than just amuse, and actually take a customer through more of the requirements of their journey than ever before, and do so in a way that inspires trust, brands could open up new sources of revenue from the services they can enable beyond their currently narrow borders. For the right use cases, speed and personalization could be worth a price premium. But more likely, the automation abilities of AI will pull costs out of the overall system and put pressure on all participants to manage efficiently and compete accordingly.

We are now opening up a new dialogue between brands and their customers. Literally. Not like the esoteric descriptions of what happened in the earlier days of digital interaction. Now we are talking back and forth. Getting things done. Together. Simply. In a trustworthy fashion. Just how the customer wants it. The race is on to see which brands can deliver.

]]>
How to Gain a Competitive Advantage on Customer Insights https://smallbiz.com/how-to-gain-a-competitive-advantage-on-customer-insights/ Thu, 20 Oct 2022 12:40:44 +0000 https://smallbiz.com/?p=78943

Companies spend billions of dollars every year to gain information about their customers, buying data from market research firms, running study after study, and using big data and sophisticated analytical models to make sense of it all. However, most of this data is likely available to your competitors as well, and it rarely delivers the meaningful behavioral understanding of customers you aspire to.

To truly differentiate and stay ahead on an ongoing basis, you need to implement a system of privileged insights — unique and relevant information you gain about your customers that only your company is privy to.

Unlike market research, privileged insights provide intel on your customers’ real needs, desires, and experiences. These insights can be gained in a variety of ways. Generally, it requires engaging with customers in ways that directly build trust and value. This might include offering services and solutions that go beyond products, creating a more robust and engaging customer service experience, integrating customers into product and service development, and observing and interacting with customers while they use your products.

For our recent book Beyond Digital: How Great Leaders Transform Their Organizations and Shape the Future, we’ve researched more than a dozen companies that have undertaken significant transformation to position themselves for success in the digital age, including Adobe, Cleveland Clinic, Citigroup, Eli Lilly, Hitachi, Honeywell, Inditex, Komatsu, Microsoft, Philips, STC Pay, and Titan. It isn’t that these companies necessarily use technology better or were first to build a consumer data lake — it’s that they’re incredibly focused on wiring a deep understanding of customers into the heart of their business models, their operations, and how they make day-to-day decisions. They passionately focus on increasing value for their customers, all while absorbing and leveraging a wealth of information that their competitors don’t have. By doing so, they’re able to further differentiate themselves and remain relevant.

How can you go about building such a privileged insights system that fuels your company’s success? Here are some lessons learned from the companies we studied and more that we have worked with.

Establish a foundation of trust and value

Be clear about how you earn customers’ trust to engage with you, and the benefit they gain from doing so. This goes to the heart of how customers trust you to consistently deliver outcomes they value. Customers that see their lives or businesses intrinsically linked and improved because of what you offer are much more likely to engage with you and more willing to exchange unique information and insight into their core needs and challenges.

Building a foundation of trust also includes having impeccable clarity on your values, principles, and governance around how you will treat customers’ data. Will you use the data to only advance your own commercial position, or to improve the customer experience and benefits? Will you take responsibility to not misuse the data? Will you have strict enforcement if an issue happens? Leaders must ensure that people across the organization understand that it’s not about extracting data from people and making people the product — it’s about making customers an integral partner in the value chain.

Ashley Still, senior vice president and general manager for Digital Media at Adobe, is absolutely clear about the company’s guiding principle for how it uses customer data: “We are committed to data privacy and sensitive to how we use data. Responsible use of customer data can create greater experiences, but the second we start using it to gain tactical advantage, we’ve missed the mark.”

Together with the trust and value that is embedded in their users’ experience and the value proposition Adobe offers, these practices lay the foundation on which the privileged insights system is built.

Integrate how you collect privileged insights into your day-to-day actions

Make the collection of insights a byproduct of your engagement and relationship with customers, not a separate process. This will allow you to gain customer insights while you create value for them, be it through your physical or digital interactions.

This should start with all your existing customer touch points (e.g., customer service, warranty support, product delivery, etc.) and extend to many new opportunities to engage and improve your value proposition. The ultimate question you will need to answer is whether customers feel positively impacted by the information you are collecting.

Consider fast fashion company Inditex, owner of the Zara brand. Its retail employees are trained to serve as its frontline eyes and ears, tracking data, observing customers, and gathering informal impressions — all while helping customers find the styles that suit them best. The stores compile information about the choices customers make, their inquiries about missing items, and their suggestions. Are shoppers looking for skirts or trousers? Bold or subtle colors? These impressions are sent directly to a group of designers and operational experts at headquarters, together with detailed daily data on exactly what is selling and where.

Combining these in-store impressions with deep insights into what people are searching for and buying online puts Inditex at a clear advantage over online-only fashion companies. All these insights are rolled up, aggregated, and analyzed almost in real time, then turned into designs for new garments or into improved production, logistics, and marketing practices.

The key is in the flexibility to adapt to customer preferences and the precision to create and produce what customers are asking for, at the moment they are asking for it. At the end of the year, Inditex’s more than 700 designers will have come up with 60,000 different creations, and the stores worldwide will have received new waves of collections twice per week.

Wire your privileged insights into how you work

Put your privileged insights to work by connecting them into your operations — changing structures, processes, incentives, metrics, information flows, etc., to enable every part of the business to make decisions that are based on your unique insights.

The most obvious (though not always well-executed) example of this involves wiring privileged insights into your company’s innovation process, using them as the basis for ideation and looking for ways to integrate customers into the actual development process (for example, in beta pilots). But privileged insights need to be linked to many areas beyond innovation, including the determination of investments in tools and technologies that facilitate ongoing experiences, the interaction of your selling and customer teams, and your forecasting and strategic planning. Be prepared for those insights to materially change the fundamentals of your business, not just lead to incremental changes or a new feature in some of your products. And rethink how you measure the impact of your privileged insights capability; the metrics most companies use today don’t go nearly far enough, and more innovative measures, such as return on experience (RoX), should be considered by companies pursuing this capability.

Consider Salesforce. From its inception, Salesforce has been acutely aware of the need to build its business on trust — not surprising given the sensitivity of the data customers share on the platform. This values-based relationship with users allows the company to gain deep insights into what works well, what needs improvement, and additional services customers would like to get.

These insights directly feed into and fuel Salesforce's product development strategy and allow the company to extend its value proposition. With customer success at the heart of the relationship Salesforce builds with its customers, the company has established a unique platform that leverages insights from customer usage data to inform strategies that enhance long-term customer value, and thereby drive customer retention and growth. These insights enable Salesforce to more effectively co-develop solutions with partners and customers, tailor them to various industries, and offer them as part of its platform as new industry clouds. This system of product development and innovation, fueled by proprietary customer insights, is one of the key factors that has made Salesforce the fastest-growing software company of all time.

The power of a privileged insights system stems from its self-reinforcing nature: The more customers trust your company and derive value from your products and services, the more likely they are to open up and engage with you. The more they do so, the more insights you'll gain about what customers want and need; and the more insights you have and the better you are at wiring those insights into everything you do, the more you can improve your customer experience, products, and services and build additional trust and connection with customers. It's a true flywheel.

For the flywheel to work and fuel your company’s success, you need to work on all three of these areas, starting with a brutally honest assessment of the real gaps you may have across each area and realizing that creating a system of privileged insights will not come without meaningful transformation.

It’s easy to see how neglecting any one area will keep the whole system from working. If customers don’t trust you, they’re not going to open up. If providing insights is a one-way street, it will appeal only to your most loyal and passionate customers. And if you let your customers down and don’t act on their feedback, you will probably not get a second chance to get it right.

This is a big task and requires a fundamentally different way of thinking about data, research, and the entire cycle of touch points with customers. But it’s one that any company in any industry needs to take on to stay relevant. We can think of no other capability that is so universally needed.
