Digital transformation | SmallBiz.com - What your small business needs to incorporate, form an LLC or corporation! https://smallbiz.com INCORPORATE your small business, form a corporation, LLC or S Corp. The SmallBiz network can help with all your small business needs! Mon, 10 Jul 2023 12:52:51 +0000 en-US hourly 1 https://wordpress.org/?v=6.4.2 https://smallbiz.com/wp-content/uploads/2021/05/cropped-biz_icon-32x32.png Digital transformation | SmallBiz.com - What your small business needs to incorporate, form an LLC or corporation! https://smallbiz.com 32 32 What Will Working with AI Really Require? https://smallbiz.com/what-will-working-with-ai-really-require/ Thu, 08 Jun 2023 12:25:10 +0000 https://smallbiz.com/?p=109381

Despite concerns about machines replacing human workers, research challenges the overhyped claims of ascendant AI. In most knowledge-intensive tasks, workers will more likely find themselves augmented in partnership with machines than automated out of a job. Humans and machines will simultaneously collaborate and compete with one another, like a track team competing in various events. In some events, like the 100-yard dash, teammates compete against each other, but in others, such as the relay race, they work together towards a common goal.

In such a relationship, humans and AI systems both need distinct competitive and cooperative skills. Competitive skills refer to the unique advantages that either humans or AI possess over the other, while cooperative skills enhance the ability of humans and AI to work together effectively. To foster a symbiotic relationship between humans and AI, organizations must find the appropriate balance between investing in human skills and technological capabilities — and think strategically about how they attract and retain talent.

Humans’ competitive and cooperative skills

AI may not replace workers in a human-centered workplace, but it could fundamentally transform their work. In order to remain relevant and indispensable, humans need to work with and against the machines.

Humans’ cooperative skills

Effectively collaborating with AI systems — working with them — requires data-driven analytical abilities, but also understanding about the capabilities and limitations of the machines (areas where human intervention is most required), how to interpret and contextualize AI-generated insights, and the ethical considerations of AI-powered decision making. These include:

Data-centric skills: The ability to understand the results generated by algorithms to inform and support decision-making. A recent survey highlighted (1) the ability to distinguish relevant data and evaluate its credibility, (2) capability to validate results by testing hypotheses through A/B testing, and (3) skill in creating and tailoring clear and comprehensible visualizations to communicate results to multiple stakeholders.

AI literacy: Understanding how algorithms work, how they can support and augment human decision-making as well as the limitations and biases that may be present in their decision-making processes. Area experts will likely take on the responsibility of developing fairness criteria for algorithmic outcomes that promote equity, especially for vulnerable populations, and continuously auditing algorithmic results against these criteria.

Algorithmic communication: Understanding how to articulate human needs and objectives to algorithms, as well as how to interpret and explain the results generated by algorithms to others is important and research shows that we often err by talking to machines – even advanced AI tools – as though they were human. We do better when we recognize that we should talk to machines in specific ways that build on their strengths. For example, through “prompt engineering,” or crafting prompts to elicit most effective responses from AI systems, humans can teach AI models to produce the desired results for specific tasks.

Humans’ competitive skills

People also need to hone the human-centered skills and abilities that cannot be replicated by machines — that help them work against AI partners — such as those rooted in emotional intelligence (e.g., communication skills for interacting with other human stakeholders), a strategic and holistic perspective, critical thinking and intuitive decision making. These include:

Emotional intelligence: The ability to recognize one’s own emotions and reflect on them in the context of interacting with algorithms, as well as understanding and communicating the emotional implications of algorithm-generated results. For example, the human customer service agents may not solely rely on scripts or real-time advice provided by the AI agents, but instead personalize solutions by empathetically comprehending the customers’ requirements or feelings.

Holistic and strategic thinking: The ability to consider the big picture and understand how algorithmic results fit into the larger context of a problem or decision. For example, algorithmic inference can inform pathologists, but they still need to consider factors such as patients’ medical history, lifestyle, and overall health to arrive at an informed and comprehensive diagnosis.

Creativity and outside-the-box thinking: The ability to think creatively and use algorithms in novel and innovative ways. For instance, AI systems are used to analyze massive consumer data and identify patterns in the interests and behavior of a target audience, but it is the creative thinking of marketers that will craft a message that resonates with the audience.

Critical and ethical thinking: The ability to critically assess machine inferences, and to understand the ethical implications and responsibilities associated with using algorithms, including privacy and accountability. As generative AI such as ChatGPT are increasingly integrated into various products, experts in different business domains are needed to work alongside these systems to continuously address potential false or biased information to which these systems are prone.

AI’s competitive and cooperative skills

It is not only humans that must acquire new capabilities. While AI systems are rapidly expanding their competitive abilities over humans, they still need to improve their cooperative skills in order to be widely adopted by organizations. In particular, the lack of explainability remains a challenge in high-stakes decisions, hindering accountability and compliance with legal requirements. For example, if the AI’s decision-making process remains opaque to medical professionals, it will impede the adoption of these systems in healthcare, even if these systems deliver near-optimal decisions.

AI’s cooperative skills

To work effectively with human partners, AI systems need skills such as:

NLP (Natural Language Processing): The ability to process, analyze, understand and mimic human language. Systems like ChatGPT excel at interacting with humans because they make it easy for people to ask questions and express themselves in a natural way, including expressing emotions like excitement, frustration, or surprise. The reality, however, is that these systems are far from sentient. Situations that go beyond a function are best done by a human or with human supervision. For example, AI can analyze and reveal patterns in healthcare data, but it should not replace a physician’s role in providing individualized care to patients.

Explainability: The ability to provide humans with clear and understandable explanations of its decision-making process and results. The inherent inscrutability of deep-learning AI is an ongoing challenge that requires multiple solutions, including building an “explainability framework” that addresses the risks of AI black boxes to specific industries and organizations. Technological solutions may also involve adding explainability engines, which offer human-readable explanations for AI ‘systems’ decisions and predictions, particularly for critical areas like healthcare and finance.

Adaptability and personalization: The ability to learn from previous interactions and personalize responses based on individual users. For example, personal intelligent assistants are growing in importance in helping people tackle information and communication overload. By analyzing a user’s activities, these assistants work collaboratively with workers in an individualized manner, enhancing their productivity in areas such as time management, meeting organization, and communication assistance.

Context awareness: The ability to understand the context in which an interaction is taking place and respond accordingly. For instance, in e-commerce websites, chatbots that present context awareness can analyze a user’s previous inquiries and purchase history to offer solutions or recommendations that are more pertinent to the customer’s needs.

AI’s competitive skills

AI systems continue to present unique competitive advantages, such as:

Analytical capacities: The ability to perform complex calculations, process large amounts of data, and identify patterns and relationships within the data. For example, AI systems are becoming more competent at detecting fraudulent transactions in massive datasets of credit card transactions.

Generativity: The ability to generate novel and unique outputs that are not simply reproductions of existing data. Using large models and neural networks to analyze patterns, generative AI is transforming the creation of image, text, and even music that resembles those created by human experts. These systems automated content generation, improve content quality, increase content variety, and offer personalized content.

Performance at scale: The ability to scale operations efficiently, handle a large number of real-time transactions and support large-scale applications without sacrificing performance. For example, AI systems have demonstrated superior ability to process thousands of credit card applications real-time or offer “algorithmic management” of thousands of Uber drivers and riders simultaneously, creating a structured and consistent operational framework at an unprecedented scale.

Racing with and against the machines

The challenge for organizations trying to build a strategy for using new and more AI tools lies in designing organizational systems that effectively balance the competitive and cooperative skills of humans and AI. Organizations that seek to strike this balance should consider the following:

Democratize data to foster the continuous development of competitive human and machine skills. AI systems can generate data insights at scale and detect patterns often missed by the human eye, but translating that competitiveness into business growth and agility requires the very human skills of strategic thinking and creativity. To enable this type of collaboration, companies should democratize access to data throughout all levels of their organization. Nearly every role within your organization should be working alongside data analytics to inform how to make the workflows more efficient, make data-driven decisions and ultimately inform a better understanding of how to service the end customer. The more data visibility AI can give your workforce, the greater ability for humans to apply and develop their uniquely competitive skills.

Look outside your own organization’s walls for cooperative human skills. A recent Deloitte study found that nearly half (49%) of traditional workers — full-time employees — updated their skills more than a year ago or have never engaged in skills development, whereas 60% of the alternative workforce — defined as gig workers, freelancers, independent workers, and crowd workers — updated their skills within the past six months. In fact, 44% of alternative workers at large organizations hold a postgraduate degree according to new research by Upwork. This is likely due to the fact that most technical skill sets, according to research by IBM, experience a half-life of 2.5 years. And, according to Upwork’s database, the top in-demand skills are technical and related to web, mobile and software development. If your organization is struggling to keep up with cooperative human skills to work alongside machines, it may be time to engage a broader ecosystem of skills outside your organization.

Don’t let geography limit the skills your company is hiring for. The pandemic ushered in a new era of work as many organizations learned work could be done remotely. Technical work can be done almost anywhere in the world as machines have largely made geography irrelevant to finding the skills you need to cooperate with machines. Enabling remote work strategies will ensure your organization is equipped to capture the ever-changing talent landscape and help you win in the race with and against the machines.

By focusing on the balance of these skills, organizations can reap the benefits of an infinity loop between AI and human competitive skills. In this balance, humans may work toward “coopetition” as an arrangement where parties engage in both cooperative and competitive behavior. In such a relationship with AI systems, humans may leverage both the partnership with machines and their own competitive edge against the machine. This relationship helps to maintain their relevance and indispensability as algorithms are increasingly working as team members or even managers (i.e., algorithmic management).

This formulation offered here helps shape the future of education and skill development, by emphasizing the importance of focusing on skills that give humans a competitive advantage over machines, rather than those that we have already lost to machines. For example, the use of calculators and spell checkers no longer serve as our advantage as we surrendered these tasks to technology long ago.

]]>
Managing the Risks of Generative AI https://smallbiz.com/managing-the-risks-of-generative-ai/ Tue, 06 Jun 2023 12:15:47 +0000 https://smallbiz.com/?p=109074

Corporate leaders, academics, policymakers, and countless others are looking for ways to harness generative AI technology, which has the potential to transform the way we learn, work, and more. In business, generative AI has the potential to transform the way companies interact with customers and drive business growth. New research shows 67% of senior IT leaders are prioritizing generative AI for their business within the next 18 months, with one-third (33%) naming it as a top priority. Companies are exploring how it could impact every part of the business, including sales, customer service, marketing, commerce, IT, legal, HR, and others.

However, senior IT leaders need a trusted, data-secure way for their employees to use these technologies. Seventy-nine-percent of senior IT leaders reported concerns that these technologies bring the potential for security risks, and another 73% are concerned about biased outcomes. More broadly, organizations must recognize the need to ensure the ethical, transparent, and responsible use of these technologies.

A business using generative AI technology in an enterprise setting is different from consumers using it for private, individual use. Businesses need to adhere to regulations relevant to their respective industries (think: healthcare), and there’s a minefield of legal, financial, and ethical implications if the content generated is inaccurate, inaccessible, or offensive. For example, the risk of harm when an generative AI chatbot gives incorrect steps for cooking a recipe is much lower than when giving a field service worker instructions for repairing a piece of heavy machinery. If not designed and deployed with clear ethical guidelines, generative AI can have unintended consequences and potentially cause real harm. 

Organizations need a clear and actionable framework for how to use generative AI and to align their generative AI goals with their businesses’ “jobs to be done,” including how generative AI will impact sales, marketing, commerce, service, and IT jobs.

In 2019, we published our trusted AI principles (transparency, fairness, responsibility, accountability, and reliability), meant to guide the development of ethical AI tools. These can apply to any organization investing in AI. But these principles only go so far if organizations lack an ethical AI practice to operationalize them into the development and adoption of AI technology. A mature ethical AI practice operationalizes its principles or values through responsible product development and deployment — uniting disciplines such as product management, data science, engineering, privacy, legal, user research, design, and accessibility — to mitigate the potential harms and maximize the social benefits of AI. There are models for how organizations can start, mature, and expand these practices, which provide clear roadmaps for how to build the infrastructure for ethical AI development.

But with the mainstream emergence — and accessibility — of generative AI, we recognized that organizations needed guidelines specific to the risks this specific technology presents. These guidelines don’t replace our principles, but instead act as a North Star for how they can be operationalized and put into practice as businesses develop products and services that use this new technology.

Guidelines for the ethical development of generative AI

Our new set of guidelines can help organizations evaluate generative AI’s risks and considerations as these tools gain mainstream adoption. They cover five focus areas.

Accuracy

Organizations need to be able to train AI models on their own data to deliver verifiable results that balance accuracy, precision, and recall (the model’s ability to correctly identify positive cases within a given dataset). It’s important to communicate when there is uncertainty regarding generative AI responses and enable people to validate them. This can be done by citing the sources where the model is pulling information from in order to create content, explaining why the AI gave the response it did, highlighting uncertainty, and creating guardrails preventing some tasks from being fully automated.

Safety

Making every effort to mitigate bias, toxicity, and harmful outputs by conducting bias, explainability, and robustness assessments is always a priority in AI. Organizations must protect the privacy of any personally identifying information present in the data used for training to prevent potential harm. Further, security assessments can help organizations identify vulnerabilities that may be exploited by bad actors (e.g., “do anything now” prompt injection attacks that have been used to override ChatGPT’s guardrails).

Honesty

When collecting data to train and evaluate our models, respect data provenance and ensure there is consent to use that data. This can be done by leveraging open-source and user-provided data. And, when autonomously delivering outputs, it’s a necessity to be transparent that an AI has created the content. This can be done through watermarks on the content or through in-app messaging.

Empowerment

While there are some cases where it is best to fully automate processes, AI should more often play a supporting role. Today, generative AI is a great assistant. In industries where building trust is a top priority, such as in finance or healthcare, it’s important that humans be involved in decision-making — with the help of data-driven insights that an AI model may provide — to build trust and maintain transparency. Additionally, ensure the model’s outputs are accessible to all (e.g., generate ALT text to accompany images, text output is accessible to a screen reader). And of course, one must treat content contributors, creators, and data labelers with respect (e.g., fair wages, consent to use their work).

Sustainability

Language models are described as “large” based on the number of values or parameters it uses. Some of these large language models (LLMs) have hundreds of billions of parameters and use a lot of energy and water to train them. For example, GPT3 took 1.287 gigawatt hours or about as much electricity to power 120 U.S. homes for a year, and 700,000 liters of clean freshwater.

When considering AI models, larger doesn’t always mean better. As we develop our own models, we will strive to minimize the size of our models while maximizing accuracy by training on models on large amounts of high-quality CRM data. This will help reduce the carbon footprint because less computation is required, which means less energy consumption from data centers and carbon emission.

Integrating generative AI

Most organizations will integrate generative AI tools rather than build their own. Here are some tactical tips for safely integrating generative AI in business applications to drive business results:

Use zero-party or first-party data

Companies should train generative AI tools using zero-party data — data that customers share proactively — and first-party data, which they collect directly. Strong data provenance is key to ensuring models are accurate, original, and trusted. Relying on third-party data, or information obtained from external sources, to train AI tools makes it difficult to ensure that output is accurate.

For example, data brokers may have old data, incorrectly combine data from devices or accounts that don’t belong to the same person, and/or make inaccurate inferences based on the data. This applies for our customers when we are grounding the models in their data. So in Marketing Cloud, if the data in a customer’s CRM all came from data brokers, the personalization may be wrong.

Keep data fresh and well-labeled

AI is only as good as the data it’s trained on. Models that generate responses to customer support queries will produce inaccurate or out-of-date results if the content it is grounded in is old, incomplete, and inaccurate. This can lead to hallucinations, in which a tool confidently asserts that a falsehood is real. Training data that contains bias will result in tools that propagate bias.

Companies must review all datasets and documents that will be used to train models, and remove biased, toxic, and false elements. This process of curation is key to principles of safety and accuracy.

Ensure there’s a human in the loop

Just because something can be automated doesn’t mean it should be. Generative AI tools aren’t always capable of understanding emotional or business context, or knowing when they’re wrong or damaging.

Humans need to be involved to review outputs for accuracy, suss out bias, and ensure models are operating as intended. More broadly, generative AI should be seen as a way to augment human capabilities and empower communities, not replace or displace them.

Companies play a critical role in responsibly adopting generative AI, and integrating these tools in ways that enhance, not diminish, the working experience of their employees, and their customers. This comes back to ensuring the responsible use of AI in maintaining accuracy, safety, honesty, empowerment, and sustainability, mitigating risks, and eliminating biased outcomes. And, the commitment should extend beyond immediate corporate interests, encompassing broader societal responsibilities and ethical AI practices.

Test, test, test

Generative AI cannot operate on a set-it-and-forget-it basis — the tools need constant oversight. Companies can start by looking for ways to automate the review process by collecting metadata on AI systems and developing standard mitigations for specific risks.

Ultimately, humans also need to be involved in checking output for accuracy, bias and hallucinations. Companies can consider investing in ethical AI training for front-line engineers and managers so they’re prepared to assess AI tools. If resources are constrained, they can prioritize testing models that have the most potential to cause harm.

Get feedback

Listening to employees, trusted advisors, and impacted communities is key to identifying risks and course-correcting. Companies can create a variety of pathways for employees to report concerns, such as an anonymous hotline, a mailing list, a dedicated Slack or social media channel or focus groups. Creating incentives for employees to report issues can also be effective.

Some organizations have formed ethics advisory councils — composed of employees from across the company, external experts, or a mix of both — to weigh in on AI development. Finally, having open lines of communication with community stakeholders is key to avoiding unintended consequences.

• • •

With generative AI going mainstream, enterprises have the responsibility to ensure that they’re using this technology ethically and mitigating potential harm. By committing to guidelines and having guardrails in advance, companies can ensure that the tools they deploy are accurate, safe and trusted, and that they help humans flourish.

Generative AI is evolving quickly, so the concrete steps businesses need to take will evolve over time. But sticking to a firm ethical framework can help organizations navigate this period of rapid transformation.

]]>