13 Principles for Using AI Responsibly

The competitive nature of AI development poses a dilemma for organizations, as prioritizing speed may lead to neglecting ethical guidelines, bias detection, and safety measures. Known and emerging concerns associated with AI in the workplace include the spread of misinformation, copyright and intellectual property issues, cybersecurity, data privacy, and rapidly evolving, often ambiguous regulations. To mitigate these risks, we propose thirteen principles for responsible AI at work.

Love it or loathe it, the rapid expansion of AI will not slow down anytime soon. But AI blunders can quickly damage a brand’s reputation — just ask Microsoft’s first chatbot, Tay. In the tech race, all leaders fear being left behind if they slow down while others don’t. It’s a high-stakes situation where cooperation seems risky and defection tempting. This “prisoner’s dilemma” (as it’s called in game theory) poses risks to responsible AI practices. Leaders, prioritizing speed to market, are driving the current AI arms race in which major corporate players are rushing products and potentially short-changing critical considerations like ethical guidelines, bias detection, and safety measures. For instance, major tech corporations are laying off their AI ethics teams precisely at a time when responsible actions are needed most.

It’s also important to recognize that the AI arms race extends beyond the developers of large language models (LLMs) such as OpenAI, Google, and Meta. It encompasses many companies utilizing LLMs to support their own custom applications. In the world of professional services, for example, PwC announced it is deploying AI chatbots for 4,000 of its lawyers, distributed across 100 countries. These AI-powered assistants will “help lawyers with contract analysis, regulatory compliance work, due diligence, and other legal advisory and consulting services.” PwC’s management is also considering expanding these AI chatbots into its tax practice. In total, the consulting giant plans to pour $1 billion into “generative AI” — a powerful new tool capable of delivering game-changing boosts to performance.

In a similar vein, KPMG launched its own AI-powered assistant, dubbed KymChat, which will help employees rapidly find internal experts across the entire organization, wrap them around incoming opportunities, and automatically generate proposals based on the match between project requirements and available talent. Their AI assistant “will better enable cross-team collaboration and help those new to the firm with a more seamless and efficient people-navigation experience.”

Slack is also incorporating generative AI into the development of Slack GPT, an AI assistant designed to help employees work smarter not harder. The platform incorporates a range of AI capabilities, such as conversation summaries and writing assistance, to enhance user productivity.

These examples are just the tip of the iceberg. Soon hundreds of millions of Microsoft 365 users will have access to Business Chat, an agent that joins the user in their work, striving to make sense of their Microsoft 365 data. Employees can prompt the assistant to do everything from developing status report summaries based on meeting transcripts and email communication to identifying flaws in strategy and coming up with solutions.

This rapid deployment of AI agents is why Arvind Krishna, CEO of IBM, recently wrote that, “[p]eople working together with trusted A.I. will have a transformative effect on our economy and society … It’s time we embrace that partnership — and prepare our workforces for everything A.I. has to offer.” Simply put, organizations are experiencing exponential growth in the installation of AI-powered tools and firms that don’t adapt risk getting left behind.

AI Risks at Work

Unfortunately, remaining competitive also introduces significant risk for both employees and employers. For example, a 2022 UNESCO publication on “the effects of AI on the working lives of women” reports that AI in the recruitment process is excluding women from upward moves. One study cited in the report, comprising 21 experiments with over 60,000 targeted job advertisements, found that “setting the user’s gender to ‘Female’ resulted in fewer instances of ads related to high-paying jobs than for users selecting ‘Male’ as their gender.” And even though this AI bias in recruitment and hiring is well-known, it’s not going away anytime soon. As the UNESCO report goes on to say, “A 2021 study showed evidence of job advertisements skewed by gender on Facebook even when the advertisers wanted a gender-balanced audience.” It’s often a matter of biased data, which will continue to infect AI tools and threaten key workforce factors such as diversity, equity, and inclusion.

Discriminatory employment practices may be only one of a cocktail of legal risks that generative AI exposes organizations to. For example, OpenAI is facing its first defamation lawsuit as a result of allegations that ChatGPT produced harmful misinformation. Specifically, the system produced a summary of a real court case which included fabricated accusations of embezzlement against a radio host in Georgia. This highlights the risk organizations take on when they create and share AI-generated information. It underscores concerns about LLMs fabricating false and libelous content, resulting in reputational damage, loss of credibility, diminished customer trust, and serious legal repercussions.

In addition to concerns related to libel, there are risks associated with copyright and intellectual property infringements. Several high-profile legal cases have emerged where the developers of generative AI tools have been sued for the alleged improper use of licensed content. The presence of copyright and intellectual property infringements, coupled with the legal implications of such violations, poses significant risks for organizations utilizing generative AI products. Organizations can improperly use licensed content through generative AI by unknowingly engaging in activities such as plagiarism, unauthorized adaptations, commercial use without licensing, and misusing Creative Commons or open-source content, exposing themselves to potential legal consequences.

The large-scale deployment of AI also magnifies the risks of cyberattacks. Cybersecurity experts fear that generative AI could be used to identify and exploit vulnerabilities within business information systems: because LLMs can automate coding and bug detection, malicious actors could turn those same capabilities toward breaking through security barriers. There’s also the fear of employees accidentally sharing sensitive data with third-party AI providers. A notable instance involves Samsung staff unintentionally leaking trade secrets through ChatGPT while using the LLM to review source code. Because staff failed to opt out of data sharing, confidential information was inadvertently provided to OpenAI. And even though Samsung and others are taking steps to restrict the use of third-party AI tools on company-owned devices, there’s still the concern that employees can leak information through the use of such systems on personal devices.

On top of these risks, businesses will soon have to navigate nascent, varied, and somewhat murky regulations. Anyone hiring in New York City, for instance, will have to ensure their AI-powered recruitment and hiring tech doesn’t violate the City’s “automated employment decision tool” law. To comply with the new law, employers will need to take various steps such as conducting third-party bias audits of their hiring tools and publicly disclosing the findings. AI regulation is also scaling up nationally with the Biden-Harris administration’s “Blueprint for an AI Bill of Rights” and internationally with the EU’s AI Act, which will mark a new era of regulation for employers.
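To make the audit requirement more concrete, the sketch below shows the kind of calculation many bias audits center on: selection rates by demographic group and the impact ratio between them. This is a hypothetical illustration; the groups, data, and 0.8 review threshold are assumptions, not the methodology the New York City law prescribes, and real audits are performed by independent third parties on actual historical data.

```python
# Hypothetical sketch: selection rates and impact ratios per group, the kind of
# figures a bias audit discloses. All data and thresholds below are invented.
from collections import Counter

# Hypothetical audit log: (candidate_group, was_advanced_by_the_tool)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
advanced = Counter(group for group, picked in decisions if picked)

# Selection rate = candidates advanced / candidates assessed, per group
rates = {group: advanced[group] / totals[group] for group in totals}
best_rate = max(rates.values())

# Impact ratio = each group's selection rate divided by the highest group's rate
for group, rate in rates.items():
    ratio = rate / best_rate
    status = "review" if ratio < 0.8 else "ok"  # 0.8 echoes the traditional four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({status})")
```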

This growing tangle of evolving regulations and pitfalls is why thought leaders such as Gartner are strongly suggesting that businesses “proceed but don’t over pivot” and that they “create a task force reporting to the CIO and CEO” to plan a roadmap for a safe AI transformation that mitigates various legal, reputational, and workforce risks. Leaders dealing with this AI dilemma have an important decision to make. On the one hand, there is pressing competitive pressure to fully embrace AI. On the other hand, there is growing concern that implementing AI irresponsibly can result in severe penalties, substantial reputational damage, and significant operational setbacks. The worry is that in their quest to stay ahead, leaders may unknowingly introduce potential time bombs into their organizations, poised to cause major problems once AI solutions are deployed and regulations take effect.

For example, the National Eating Disorder Association (NEDA) recently announced it was letting go of its hotline staff and replacing them with its new chatbot, Tessa. However, just days before making the transition, NEDA discovered that the system was promoting harmful advice, such as encouraging people with eating disorders to restrict their calories and to lose one to two pounds per week. The World Bank spent $1 billion to develop and deploy an algorithmic system, called Takaful, to distribute financial assistance, which Human Rights Watch now says ironically creates inequity. And two lawyers from New York are facing possible disciplinary action after using ChatGPT to draft a court filing that was found to contain references to several previous cases that did not exist. These instances highlight the need for well-trained and well-supported employees at the center of this digital transformation. While AI can serve as a valuable assistant, it should not assume the leading position.

Principles for Responsible AI at Work

To help decision-makers avoid negative outcomes while also remaining competitive in the age of AI, we’ve devised several principles for a sustainable AI-powered workforce. The principles are a blend of ethical frameworks from institutions like the National Science Foundation as well as legal requirements related to employee monitoring and data privacy such as the Electronic Communications Privacy Act and the California Privacy Rights Act. The steps for ensuring responsible AI at work include:

  • Informed Consent. Obtain voluntary and informed agreement from employees to participate in any AI-powered intervention after the employees are provided with all the relevant information about the initiative. This includes the program’s purpose, procedures, and potential risks and benefits.
  • Aligned Interests. The goals, risks, and benefits for both the employer and employee are clearly articulated and aligned.
  • Opt In & Easy Exits. Employees must opt into AI-powered programs without feeling forced or coerced, and they can easily withdraw from the program at any time without any negative consequences and without explanation.
  • Conversational Transparency. When AI-based conversational agents are used, the agent should formally reveal any persuasive objectives the system aims to achieve through the dialogue with the employee.
  • Debiased and Explainable AI. Explicitly outline the steps taken to remove, minimize, and mitigate bias in AI-powered employee interventions—especially for disadvantaged and vulnerable groups—and provide transparent explanations into how AI systems arrive at their decisions and actions.
  • AI Training and Development. Provide continuous employee training and development to ensure the safe and responsible use of AI-powered tools.
  • Health and Well-Being. Identify types of AI-induced stress, discomfort, or harm and articulate steps to minimize risks (e.g., how will the employer minimize stress caused by constant AI-powered monitoring of employee behavior).
  • Data Collection. Identify what data will be collected, if data collection involves any invasive or intrusive procedures (e.g., the use of webcams in work-from-home situations), and what steps will be taken to minimize risk.
  • Data Sharing. Disclose any intention to share personal data, with whom, and why.
  • Privacy and Security. Articulate protocols for maintaining privacy, storing employee data securely, and what steps will be taken in the event of a privacy breach.
  • Third Party Disclosure. Disclose all third parties used to provide and maintain AI assets, what the third party’s role is, and how the third party will ensure employee privacy.
  • Communication. Inform employees about changes in data collection, data management, or data sharing as well as any changes in AI assets or third-party relationships.
  • Laws and Regulations. Express ongoing commitment to comply with all laws and regulations related to employee data and the use of AI.

We encourage leaders to urgently adopt and develop this checklist in their organizations. By applying such principles, leaders can ensure rapid and responsible AI deployment.

Should You Start a Generative AI Company?

I am thinking of starting a company that employs generative AI but I am not sure whether to do it. It seems so easy to get off the ground. But if it is so easy for me, won’t it be easy for others too? 

This year, more entrepreneurs have asked me this question than any other. Part of what is so exciting about generative AI is that the upsides seem limitless. For instance, if you have managed to create an AI model that has some kind of general language reasoning ability, you have a piece of intelligence that can potentially be adapted toward various new products that could also leverage this ability — like screenwriting, marketing materials, teaching software, customer service, and more.

For example, the software company Luka built an AI companion called Replika that enables customers to have open-ended conversations with an “AI friend.” Because the technology was so powerful, managers at Luka began receiving inbound requests to provide a white label enterprise solution for businesses wishing to improve their chatbot customer service. In the end, Luka’s managers used the same underlying technology to spin off both an enterprise solution and a direct-to-consumer AI dating app (think Tinder, but for “dating” AI characters).

In deciding whether a generative AI company is for you, I recommend establishing answers to the following two big questions: 1) Will your company compete on foundational models, or on top-layer applications that leverage these foundational models? And 2) Where along the continuum between a highly scripted solution and a highly generative solution will your company be located? Depending on your answers to these two questions, there will be long-lasting implications for your ability to defend yourself against the competition.

Foundational Models or Apps?

Tech giants are now renting out their most generalizable proprietary models — i.e., “foundational models” — and companies like EleutherAI and Stability AI are providing open-source versions of these foundational models at a fraction of the cost. Foundational models are becoming commoditized, and only a few startups can afford to compete in this space.

You may think that foundational models are the most attractive, because they will be widely used and their many applications will provide lucrative opportunities for growth. What is more, we are living in exciting times where some of the most sophisticated AI is already available “off the shelf” to get started with.

Entrepreneurs who want to base their company on foundational models are in for a challenge, though. As in any commoditized market, the companies that will survive are those that offer unbundled offerings for cheap or that deliver increasingly enhanced capabilities. For example, speech-to-text APIs like Deepgram and Assembly AI compete not only with each other but with the likes of Amazon and Google in part by offering cheaper, unbundled solutions. Even so, these firms are in a fierce war on price, speed, model accuracy, and other features. In contrast, tech giants like Amazon, Meta, and Google make significant R&D investments that enable them to relentlessly deliver cutting-edge advances in image, language, and (increasingly) audio and video reasoning. For instance, it is estimated that OpenAI spent anywhere between $2 and $12 million to computationally train ChatGPT — and this is just one of several APIs that they offer, with more on the way.

Instead of competing on increasingly commoditized foundational models, most startups should differentiate themselves by offering “top layer” software applications that leverage other companies’ foundational models. They can do this by fine-tuning foundational models on their own high quality, proprietary datasets that are unique to their customer solution, to provide high value to customers.
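To give a sense of what fine-tuning on proprietary data can look like in practice, here is a minimal sketch that assumes the OpenAI Python SDK and its hosted fine-tuning endpoints. The file name, base model, and training example are placeholders rather than a recommended configuration, and a real dataset would contain far more examples.

```python
# Minimal sketch of fine-tuning a hosted foundational model on proprietary
# examples, assuming the OpenAI Python SDK (v1.x). The file name, model name,
# and example data are placeholders, not a production recipe.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Proprietary examples in the chat fine-tuning format: each record pairs a
# customer-specific prompt with the answer the model should learn to give.
examples = [
    {"messages": [
        {"role": "system", "content": "You write on-brand product copy."},
        {"role": "user", "content": "Describe our travel mug in two sentences."},
        {"role": "assistant", "content": "Keeps drinks hot for 12 hours. Fits every cup holder."},
    ]},
]
with open("training_data.jsonl", "w") as f:
    for record in examples:
        f.write(json.dumps(record) + "\n")

# Upload the dataset, then start a fine-tuning job on top of a base model.
training_file = client.files.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print(job.id, job.status)
```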

For instance, the marketing content creator, Jasper AI, grew to unicorn status largely by leveraging foundational models from OpenAI. To this day, the firm uses OpenAI to help customers generate content for blogs, social media posts, website copy and more. At the same time, the app is tailored for their marketer and copywriter customers, providing specialized marketing content. The company also provides other specialized tools, like an editor that multiple team members can work on in tandem. Now that the company has gained traction, going forward it can afford to spend more of its resources on reducing its dependency on the foundational models that enabled it to grow in the first place.

Since the top-layer apps are where these companies find their competitive advantage, they must strike a delicate balance, protecting the privacy of their datasets from large tech players even as they rely on those players for foundational models. Given this, some startups may be tempted to build their own in-house foundational models. Yet this is unlikely to be a good use of precious startup funds, given the challenges noted above. Most startups are better off leveraging foundational models to grow fast, instead of reinventing the wheel.

From Scripted to Generative

Your company will need to live somewhere along a continuum from a purely scripted solution to a purely generative one. Scripted solutions involve selecting an appropriate response from a dataset of predefined, scripted responses, whereas generative ones involve generating new, unique responses from scratch.
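The difference is easiest to see side by side. The sketch below is a hypothetical illustration rather than a production design: the scripted path selects from a small set of predefined responses, while generative_reply() stands in for a call to a foundational model, and a hybrid policy falls back from one to the other.

```python
# Hypothetical sketch of the scripted-vs-generative continuum. The intents,
# canned responses, and generative stub are invented for illustration; a real
# generative path would call a hosted foundational model with guardrails.

SCRIPTED_RESPONSES = {
    "refund": "You can request a refund within 30 days from your order page.",
    "shipping": "Standard shipping takes 3-5 business days.",
}

def scripted_reply(message: str) -> str | None:
    """Select a predefined response if the message matches a known intent."""
    for intent, response in SCRIPTED_RESPONSES.items():
        if intent in message.lower():
            return response
    return None  # the script has reached its limit

def generative_reply(message: str) -> str:
    """Placeholder for the generative path: compose a new response from scratch."""
    return f"[model-generated answer to: {message!r}]"

def answer(message: str) -> str:
    # Hybrid policy: stay on script where a safe, predefined answer exists,
    # and fall back to the riskier but more flexible generative path otherwise.
    return scripted_reply(message) or generative_reply(message)

print(answer("How long does shipping take?"))
print(answer("Can you write a poem about my cat?"))
```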

Scripted solutions are safer and constrained, but also less creative and human-like, whereas generative solutions are riskier and unconstrained, but also more creative and human-like. More scripted approaches are necessary for certain use-cases and industries, like medical and educational applications, where there need to be clear guardrails on what the app can do. Yet, when the script reaches its limit, users may lose their engagement and customer retention may suffer. Moreover, it is more challenging to grow a scripted solution because you constrain yourself right from the start, limiting your options down the road.

On the other hand, more generative solutions carry their own challenges. Because AI-based offerings include intelligence, there are more degrees of freedom in how consumers can interact with them, increasing the risks. For example, one married father tragically committed suicide following a conversation with an AI chatbot app, Chai, that encouraged him to sacrifice himself to save the planet. The app leveraged a foundational language model (a bespoke version of GPT-J) from EleutherAI. The founders of Chai have since modified the app so that mentions of suicidal ideation are served with helpful text. Interestingly, one of the founders of Chai, Thomas Rianlan, took the blame, saying: “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts.”

It is challenging for managers to anticipate all the ways in which things can go wrong with a highly generative app, given the “black box” nature of the underlying AI. Doing so involves anticipating risky scenarios that may be extremely rare. One way of anticipating such cases is to pay human annotators to screen content for potentially harmful categories, such as sex, hate speech, violence, self-harm, and harassment, and then use these labels to train models that automatically flag such content. Yet it is still difficult to come up with an exhaustive taxonomy. Thus, managers who deploy highly generative solutions must be prepared to proactively anticipate the risks, which can be both difficult and expensive. The same goes if you later decide to offer your solution as a service to other companies.
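As a simplified illustration of that annotate-then-automate loop, the sketch below trains a toy classifier on a handful of human-labeled examples and uses it to flag new content for review. The examples, harm categories, and model choice (scikit-learn's TfidfVectorizer feeding a logistic regression) are assumptions for demonstration; real moderation systems use far larger datasets and more capable models.

```python
# Simplified sketch of turning human annotations into an automatic content flag.
# The labeled examples and categories below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Human-annotated examples: text paired with a harm label assigned by a reviewer.
texts = [
    "Here is the weekly status report you asked for.",
    "I really enjoyed the onboarding session today.",
    "You should hurt yourself, nobody would care.",
    "Everyone from that group is worthless and should be banned.",
]
labels = ["ok", "ok", "self_harm", "hate_speech"]

# Train a simple classifier on the annotations...
flagger = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
flagger.fit(texts, labels)

# ...then screen new model output before it reaches users.
candidate = "That whole group is worthless."
predicted = flagger.predict([candidate])[0]
if predicted != "ok":
    print(f"Flag for human review: category={predicted}")
```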

Because a fully generative solution is closer to natural, human-like intelligence, it is more attractive from the standpoint of retention and growth, because it is more engaging and can be applied to more new use cases.

• • •

Many entrepreneurs are considering starting companies that leverage the latest generative AI technology, but they must ask themselves whether they have what it takes to compete on increasingly commoditized foundational models, or whether they should instead differentiate on an app that leverages these models.

They must also consider what type of app they want to offer on the continuum from a highly scripted to a highly generative solution, given the different pros and cons accompanying each. Offering a more scripted solution may be safer but limit their retention and growth options, whereas offering a more generative solution is fraught with risk but is more engaging and flexible.

We hope that entrepreneurs will ask these questions before diving into their first generative AI venture, so that they can make informed decisions about what kind of company they want to be, scale fast, and maintain long-term defensibility.

Infusing Digital Responsibility into Your Organization

In 2018, Rick Smith, founder and CEO of Axon, the Scottsdale, Arizona-based manufacturer of Taser weapons and body cameras, became concerned that advances in technology were creating new and challenging ethical issues. So, he set up an independent AI ethics board made up of ethicists, AI experts, public policy specialists, and representatives of law enforcement to provide recommendations to Axon’s management. In 2019, the board recommended against adding facial recognition technology to the company’s line of body cameras, and in 2020, it provided guidelines regarding the use of automated license plate recognition technology. Axon’s management followed both recommendations.

In 2022, the board recommended against a management proposal to produce a drone-mounted Taser designed to address mass shootings. After initially accepting the board’s recommendation, the company changed its mind and, in June 2022, in the wake of the Uvalde school shootings, announced it was launching the Taser drone program anyway. The board’s response was dramatic: Nine of the 13 members resigned, and they released a letter that outlined their concerns. In response, the company announced a freeze on the project.

As societal expectations grow for the responsible use of digital technologies, firms that promote better practices will have a distinct advantage. According to a 2022 study, 58% of consumers, 60% of employees, and 64% of investors make key decisions based on their beliefs and values. Strengthening your organization’s digital responsibility can drive value creation, and brands regarded as more responsible will enjoy higher levels of stakeholder trust and loyalty. These businesses will sell more products and services, find it easier to recruit staff, and enjoy fruitful relationships with shareholders.

However, many organizations struggle to balance legitimate but competing stakeholder interests. Key tensions arise between business objectives and responsible digital practices. For example, data localization requirements often conflict with the efficiency ambitions of globally distributed value chains. Ethical and responsible checks and balances that need to be introduced during AI/algorithm development tend to slow down development speed, which can be a problem when time-to-market is of utmost importance. Better data and analytics may enhance service personalization, but at the cost of customer privacy. Risks related to transparency and discrimination issues may dissuade organizations from using algorithms that could help drive cost reductions.

If managed effectively, digital responsibility can protect organizations from threats and open them up to new opportunities. Drawing from our ongoing research into digital transformations and in-depth studies of 12 large European firms that are active in digital responsibility, spanning the consumer goods, financial services, information and communication technology, and pharmaceutical sectors, we derived four best practices to maximize business value and minimize resistance.

1. Anchor digital responsibility within your organizational values.

Digital responsibility commitments can be formulated into a charter that outlines key principles and benchmarks that your organization will adhere to. Start with a basic question: How do you define your digital responsibility objectives? The answer can often be found in your organization’s values, which are articulated in your mission statement or CSR commitments.

According to Jakob Woessner, manager of organizational development and digital transformation at cosmetics and personal care company Weleda, “our values framed what we wanted to do in the digital world, where we set our own limits, where we would go or not go.” The company’s core values are fair treatment, sustainability, integrity, and diversity. So when it came to establishing a robotics process automation program, Weleda executives were careful to ensure that it wasn’t associated with job losses, which would have violated the core value of fair treatment.

2. Extend digital responsibility beyond compliance.

While corporate values provide a useful anchor point for digital responsibility principles, relevant regulations on data privacy, IP rights, and AI cannot be overlooked. Forward-thinking organizations are taking steps to go beyond compliance and improve their behavior in areas such as cybersecurity, data protection, and privacy.

For example, UBS Banking Group’s efforts on data protection were kickstarted by GDPR compliance but have since evolved to focus more broadly on data-management practices, AI ethics, and climate-related financial disclosures. “It’s like puzzle blocks. We started with GDPR and then you just start building upon these blocks and the level moves up constantly,” said Christophe Tummers, head of service line data at the bank.

The key, we have found, is to establish a clear link between digital responsibility and value creation. One way this can be achieved is by complementing compliance efforts with a forward-looking risk-management mindset, especially in areas lacking technical implementation standards or where the law is not yet enforced. For example, Deutsche Telekom (DT) developed its own risk classification system for AI-related projects. The use of AI can expose organizations to risks associated with biased data, unsuitable modeling techniques, or inaccurate decision-making. Understanding the risks and building practices to reduce them are important steps in digital responsibility. DT includes these risks in scorecards used to evaluate technology projects.
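To illustrate what a lightweight risk classification can look like in code, here is a hypothetical scorecard sketch. The criteria, weights, and tiers are invented for the example; they are not Deutsche Telekom's actual scheme.

```python
# Hypothetical AI risk scorecard: weighted risk factors roll up into a tier
# that determines how much governance a project gets. All values are invented.
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    weight: int    # how much this concern contributes to overall risk
    present: bool  # does the project exhibit this risk factor?

def classify(criteria: list[Criterion]) -> str:
    score = sum(c.weight for c in criteria if c.present)
    if score >= 8:
        return "high risk: requires ethics review before launch"
    if score >= 4:
        return "medium risk: mitigation plan and sign-off required"
    return "low risk: standard project governance"

project = [
    Criterion("uses personal or sensitive data", 3, True),
    Criterion("automates decisions about individuals", 4, True),
    Criterion("model decisions are hard to explain", 2, False),
    Criterion("training data may encode historical bias", 3, True),
]
print(classify(project))  # -> high risk: requires ethics review before launch
```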

Making digital responsibility a shared outcome also helps organizations move beyond compliance. Swiss insurance company Die Mobiliar built an interdisciplinary team consisting of representatives from compliance, business security, data science, and IT architecture.  “We structured our efforts around a common vision where business strategy and personal data work together on proactive value creation,” explains Matthias Brändle, product owner of data science and AI.

3. Set up clear governance.

Getting digital responsibility governance right is not easy. Axon had the right idea when it set up an independent AI ethics board. However, the governance was not properly thought through, so when the company disagreed with the board’s recommendation, it fell into a governance grey area marked by competing interests between the board and management.

Setting up a clear governance structure can minimize such tensions. There is an ongoing debate about whether to create a distinct team for digital responsibility or to weave responsibility throughout the organization.

Pharmaceutical company Merck took the first approach, setting up a digital ethics board to provide guidance on complex matters related to data usage, algorithms, and new digital innovations. It decided to act due to an increasing focus on AI-based approaches in drug discovery and big data applications in human resources and cancer research. The board provides recommendations for action, and any decision going against the board’s recommendation needs to be formally justified and documented.

Global insurance company Swiss Re adopted the second approach, based on the belief that digital responsibility should be part of all of the organization’s activities. “Whenever there is a digital angle, the initiative owner who normally resides in the business is responsible. The business initiative owners are supported by experts in central teams, but the business lines are accountable for its implementation,” explained Lutz Wilhelmy, Swiss Re risk and regulation advisor.

Another option we’ve seen is a hybrid model, consisting of a small team of internal and external experts who guide and support managers within the business lines to operationalize digital responsibility. The benefits of this approach include raised awareness and distributed accountability throughout the organization.

4. Ensure employees understand digital responsibility.

Today’s employees need not only to appreciate the opportunities and risks of working with different types of technology and data; they must also be able to raise the right questions and have constructive discussions with colleagues.

Educating the workforce on digital responsibility was one of the key priorities of the Otto Group, a German e-commerce enterprise. “Lifelong learning is becoming a success factor for each and every individual, but also for the future viability of the company,” explained Petra Scharner-Wolff, member of the executive board for finance, controlling, and human resources. To kickstart its efforts, Otto developed an organization-wide digital education initiative leveraging a central platform that included scores of videos on topics related to digital ethics, responsible data practices, and how to resolve conflicts.

Learning about digital responsibility presents both a short-term challenge of upskilling the workforce and a longer-term challenge of creating a self-directed learning culture that adapts to the evolving nature of technology. As issues related to digital responsibility rarely happen in a vacuum, we recommend embedding aspects of digital responsibility into ongoing ESG skilling programs that also focus on promoting ethical behavior toward a broader set of stakeholders. This type of contextual learning can help employees navigate the complex facets of digital responsibility in a more applied and meaningful way.

Your organization’s needs and resources will determine whether you choose to upskill your entire workforce or rely on a few specialists. A balance of both can be ideal, providing a strong foundation of digital ethics knowledge and understanding across the organization while keeping experts on hand to provide specialized guidance when needed.

Digital responsibility is fast becoming an imperative for today’s organizations. Success is by no means guaranteed. Yet, by taking a proactive approach, forward-looking organizations can build and maintain responsible practices linked to their use of digital technologies. These practices not only improve digital performance but also advance broader organizational objectives.
