What Roles Could Generative AI Play on Your Team? https://smallbiz.com/what-roles-could-generative-ai-play-on-your-team/ Thu, 22 Jun 2023 12:15:19 +0000

The frenzy surrounding the launch of Large Language Models (LLMs) and other types of Generative AI (GenAI) isn’t going to fade anytime soon. Users of GenAI are discovering and recommending new and interesting use cases for their business and personal lives. Many recommendations start with the assumption that GenAI requires a human prompt. Indeed, Time magazine recently proclaimed “prompt engineering” to be the next hot job, with salaries reaching $335,000. Tech forums and educational websites are focusing on prompt engineering, with Udemy already offering a course on the topic, and several organizations we work with are now beginning to invest considerable resources in training employees on how best to use ChatGPT.

However, it may be worth pausing to consider other ways of interacting with GPT technologies, which are likely to emerge soon. We present an intuitive way to think about this issue, which is based on our own survey of GenAI developments, combined with conversations with companies that are seeking to develop some versions of these.

A Framework of GPT Interactions

A good starting point is to distinguish between who is involved in the interaction — individuals, groups of people, or another machine — and who starts the interaction, human or machine. This leads to six different types of GenAI use: ChatGPT, CoachGPT, GroupGPT, BossGPT, AutoGPT, and ImperialGPT. ChatGPT, where one human initiates interaction with the machine, is already well-known. We now describe each of the other GPTs and outline their potential.

CoachGPT is a personal assistant that provides you with a set of suggestions on managing your daily life. It would base these suggestions not on explicit prompts from you, but on the basis of observing what you do and your environment. For example, it could observe you as an executive and note that you find it hard to build trust in your team; it could then recommend precise actions to overcome this blind spot. It could also come up with personalized advice on development options or even salary negotiations.

CoachGPT would subsequently track which recommendations you adopted and which you didn’t, and which ones actually benefited you, using that feedback to improve its advice. Over time, you would get a highly personalized AI advisor, coach, or consultant.

Organizations could adopt CoachGPT to advise customers on how to use a product, whether it’s a construction company offering CoachGPT to guide end users on how best to use its equipment, or an accounting firm proffering real-time advice on how best to account for a set of transactions.

To make CoachGPT effective, individuals and organizations would have to allow it to work in the background, monitoring online and offline activities. Clearly, serious privacy considerations need to be addressed before we entrust our innermost thoughts to the system. However, the potential for positive outcomes in both private and professional lives is immense.

GroupGPT would be a bona fide group member that can observe interactions between group members and contribute to the discussion. For example, it could conduct fact checking, supply a summary of the conversation, suggest what to discuss next, play the role of devil’s advocate, provide a competitor perspective, stress-test the ideas, or even propose a creative solution to the problem at hand.

The requests could come from individual group members or from the team’s boss, who need not participate in team interactions, but merely seeks to manage, motivate, and evaluate group members. The contribution could be delivered to the whole group or to specific individuals, with adjustments for that person’s role, skill, or personality.

The privacy concerns mentioned above also apply to GroupGPT, but, if addressed, organizations could take advantage of GroupGPT by using it for project management, especially on long and complicated projects involving relatively large teams across different departments or regions. Since GroupGPT would overcome human limitations on information storage and processing capacity, it would be ideal for supporting complex and dispersed teams.

BossGPT takes an active role in advising a group of people on what they could or should do, without being prompted. It could provide individual recommendations to group members, but its real value emerges when it begins to coordinate the work of group members, telling them as a group who should do what to maximize team output. BossGPT could also step in to offer individual coaching and further recommendations as the project and team dynamics evolve.

The algorithms necessary for BossGPT to work would be much more complicated, as they would have to account for somewhat unpredictable individual and group reactions to instructions from a machine, but it could have a wide range of uses. For example, an executive changing jobs could request a copy of her reactions to her first organization’s BossGPT instructions, which could then be used to assess how she would fit into the new organization — and the new organization-specific BossGPT.

At the organizational level, companies could deploy BossGPT to manage people, thereby augmenting — or potentially even replacing — existing managers. Similarly, BossGPT has tremendous potential for coordinating work across organizations and managing complex supply chains or multiple suppliers.

Companies could turn BossGPT into a product, offering their customers AI solutions to help them manage their business. These solutions could be natural extensions of the CoachGPT examples described earlier. For example, a company selling construction equipment could offer BossGPT to coordinate many end users on a construction site, and an accounting firm could provide it to coordinate the work of many employees of its customers to run the accounting function in the most efficient way.

AutoGPT entails a human giving a request or prompt to one machine, which in turn engages other machines to complete the task. In its simplest form, a human might instruct a machine to complete a task, but the machine realizes that it lacks a specific software to execute it, so it would search for the missing software on Google before downloading and installing it, and then using it to finish the request.

In a more complicated version, humans could give AutoGPT a goal (such as creating the best viral YouTube video) and instruct it to interact with another GenAI to iteratively come up with the best ChatGPT prompt to achieve the goal. The machine would then launch the process by proposing a prompt to another machine, then evaluate the outcome, and adjust the prompt to get closer and closer to the final goal.
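To make that loop concrete, here is a minimal sketch of an iterative prompt-refinement process in Python. The generate() and score() functions are hypothetical stand-ins for the two cooperating GenAI services, and the stopping rule is purely illustrative; this is not how any particular AutoGPT implementation works.

# Minimal sketch of an AutoGPT-style prompt-refinement loop.
# generate() and score() are hypothetical stand-ins for two GenAI services.

def generate(prompt: str) -> str:
    """Stand-in for the model that produces content from a prompt."""
    return f"Draft video script based on: {prompt}"

def score(output: str) -> float:
    """Stand-in for the model that judges how close the output is to the goal."""
    return min(1.0, len(output) / 200)  # toy heuristic, not a real evaluation

def refine_prompt(goal: str, rounds: int = 5, target: float = 0.9) -> str:
    prompt = f"Create content for this goal: {goal}"
    for _ in range(rounds):
        output = generate(prompt)
        quality = score(output)
        if quality >= target:  # good enough, stop iterating
            break
        # Feed the critique back into the next prompt and try again.
        prompt = f"{prompt}\nPrevious attempt scored {quality:.2f}; improve it."
    return prompt

best_prompt = refine_prompt("the best viral YouTube video")
print(best_prompt)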

In the most complicated version, AutoGPT could draw on functionalities of the other GPTs described above. For example, a team leader could task a machine with maximizing both the effectiveness and job satisfaction of her team members. AutoGPT could then switch between coaching individuals through CoachGPT, providing them with suggestions for smoother team interactions through GroupGPT, while at the same time issuing specific instructions on what needs to be done through BossGPT. AutoGPT could subsequently collect feedback from each activity and adjust all the other activities to reach the given goal.

Unlike the versions above, which are still to be created, a version of AutoGPT was rolled out in April 2023 and is quickly gaining broad acceptance. The technology is not yet perfect and requires improvements, but it is already evident that AutoGPT can complete jobs that require several tasks to be carried out one after the other.

We see its biggest applications in complex tasks, such as supply chain coordination, but also in fields such as cybersecurity. For example, organizations could prompt AutoGPT to continually address any cybersecurity vulnerabilities, which would entail looking for them — which already happens — but then instead of simply flagging them, AutoGPT would search for solutions to the threats or write its own patches to counter them. A human might still be in the loop, but since the system is self-generative within these limits, we believe that AutoGPT’s response is likely to be faster and more efficient.

ImperialGPT is the most abstract GenAI — and perhaps the most transformational — in which two or more machines would interact with each other, direct each other, and ultimately direct humans to engage in a course of action. This type of GPT worries most AI analysts, who fear losing control of AI and AI “going rogue.” We concur with these concerns, particularly if — as now — there are no strict guardrails on what AI is allowed to do.

At the same time, if ImperialGPT is allowed to come up with ideas and share them with humans, but its ability to act on those ideas is restricted, we believe it could generate extremely interesting creative solutions, especially for “unknown unknowns,” where human knowledge and creativity fall short. Such systems could then envision and game out multiple black swan events and worst-case scenarios, complete with potential costs and outcomes, and propose possible solutions.

Given the potential dangers of ImperialGPT, and the need for tight regulation, we believe that ImperialGPT will be slow to take off, at least commercially. We do anticipate, however, that governments, intelligence services, and the military will be interested in deploying ImperialGPT under strictly controlled conditions.

Implications for Your Business

So, what does our framework mean for companies and organizations around the world? First and foremost, we encourage you to step back and see the recent advances in ChatGPT as merely the first application of new AI technologies. Second, we urge you to think about the various applications outlined here and use our framework to develop applications for your own company or organization. In the process, we are sure you will discover new types of GPTs that we have not mentioned. Third, we suggest you classify these different GPTs in terms of potential value to your business, and the cost of developing them.

We believe that applications that begin with a single human initiating or participating in the interaction (GroupGPT, CoachGPT) will probably be the easiest to build and should generate substantial business value, making them the perfect initial candidates. In contrast, applications with interactions involving multiple entities or those initiated by machines (AutoGPT, BossGPT, and ImperialGPT) may be harder to implement, with trickier ethical and legal implications.

You might also want to start thinking about the complex ethical, legal, and regulatory concerns that will arise with each GPT type. Failure to do so exposes you and your company to both legal liabilities and — perhaps more importantly — an unintended negative effect on humanity.

Our next set of recommendations depends on your company type. A tech company or startup, or one that has ample resources to invest in these technologies, should start working on developing one or more of the GPTs discussed above. This is clearly a high-risk, high-reward strategy.

In contrast, if your competitive strength is not in GenAI or if you lack resources, you might be better off adopting a “wait and see” approach. This means you will be slow to adopt the current technology, but you will not waste valuable resources on what may turn out to be only an interim version of a product. Instead, you can begin preparing your internal systems to better capture and store data as well as readying your organization to embrace these new GPTs, in terms of both work processes and culture.

The launch and rapid adoption of GenAIs is rightly being considered as the next level in the evolution of AI and a potentially epochal moment for humanity in general. Although GenAIs represent breakthroughs in solving fundamental engineering and computer science problems, they do not automatically guarantee value creation for all organizations. Rather, smart companies will need to invest in modifying and adapting the core technology before figuring out the best way to monetize the innovations. Firms that do this right may indeed strike it rich in the GenAI goldrush.

Should You Start a Generative AI Company? https://smallbiz.com/should-you-start-a-generative-ai-company/ Mon, 19 Jun 2023 12:15:27 +0000 https://smallbiz.com/?p=110689

I am thinking of starting a company that employs generative AI but I am not sure whether to do it. It seems so easy to get off the ground. But if it is so easy for me, won’t it be easy for others too? 

This year, more entrepreneurs have asked me this question than any other. Part of what is so exciting about generative AI is that the upsides seem limitless. For instance, if you have managed to create an AI model that has some kind of general language reasoning ability, you have a piece of intelligence that can potentially be adapted toward various new products that could also leverage this ability — like screen writing, marketing materials, teaching software, customer service, and more.

For example, the software company Luka built an AI companion called Replika that enables customers to have open-ended conversations with an “AI friend.” Because the technology was so powerful, managers at Luka began receiving inbound requests to provide a white label enterprise solution for businesses wishing to improve their chatbot customer service. In the end, Luka’s managers used the same underlying technology to spin off both an enterprise solution and a direct-to-consumer AI dating app (think Tinder, but for “dating” AI characters).

In deciding whether a generative AI company is for you, I recommend establishing answers to the following two big questions: 1) Will your company compete on foundational models, or on top-layer applications that leverage these foundational models? And 2) Where along the continuum between a highly scripted solution and a highly generative solution will your company be located? Depending on your answers to these two questions, there will be long-lasting implications for your ability to defend yourself against the competition.

Foundational Models or Apps?

Tech giants are now renting out their most generalizable proprietary models — i.e., “foundational models” — and companies like EleutherAI and Stability AI are providing open-source versions of these foundational models at a fraction of the cost. Foundational models are becoming commoditized, and only a few startups can afford to compete in this space.

You may think that foundational models are the most attractive, because they will be widely used and their many applications will provide lucrative opportunities for growth. What is more, we are living in exciting times where some of the most sophisticated AI is already available “off the shelf” to get started with.

Entrepreneurs who want to base their company on foundational models are in for a challenge, though. As in any commoditized market, the companies that will survive are those that offer unbundled offerings for cheap or that deliver increasingly enhanced capabilities. For example, speech-to-text APIs like Deepgram and Assembly AI compete not only with each other but with the likes of Amazon and Google in part by offering cheaper, unbundled solutions. Even so, these firms are in a fierce war on price, speed, model accuracy, and other features. In contrast, tech giants like Amazon, Meta, and Google make significant R&D investments that enable them to relentlessly deliver cutting-edge advances in image, language, and (increasingly) audio and video reasoning. For instance, it is estimated that OpenAI spent anywhere between $2 and $12 million to computationally train ChatGPT — and this is just one of several APIs that they offer, with more on the way.

Instead of competing on increasingly commoditized foundational models, most startups should differentiate themselves by offering “top layer” software applications that leverage other companies’ foundational models. They can do this by fine-tuning foundational models on their own high quality, proprietary datasets that are unique to their customer solution, to provide high value to customers.
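As a rough sketch of what fine-tuning a foundational model on a proprietary dataset can look like in practice, the snippet below adapts a small open model to a toy set of domain examples using the Hugging Face Transformers and Datasets libraries. The model choice, the placeholder training texts, and the hyperparameters are illustrative assumptions, not a recommended configuration.

# Illustrative fine-tuning of a small open model on a toy "proprietary" dataset.
from transformers import AutoTokenizer, AutoModelForCausalLM, Trainer, TrainingArguments
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default

# Placeholder examples standing in for high-quality, domain-specific data.
examples = Dataset.from_dict({"text": [
    "Customer question: ... Ideal marketing copy: ...",
    "Customer question: ... Ideal marketing copy: ...",
]})

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, padding="max_length", max_length=64)
    out["labels"] = [ids.copy() for ids in out["input_ids"]]  # causal LM targets
    return out

tokenized = examples.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=tokenized,
)
trainer.train()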

For instance, the marketing content creator, Jasper AI, grew to unicorn status largely by leveraging foundational models from OpenAI. To this day, the firm uses OpenAI to help customers generate content for blogs, social media posts, website copy and more. At the same time, the app is tailored for their marketer and copywriter customers, providing specialized marketing content. The company also provides other specialized tools, like an editor that multiple team members can work on in tandem. Now that the company has gained traction, going forward it can afford to spend more of its resources on reducing its dependency on the foundational models that enabled it to grow in the first place.

Since the top-layer apps are where these companies find their competitive advantage, they must strike a delicate balance: protecting the privacy of their datasets from large tech players even as they rely on those players for foundational models. Given this, some startups may be tempted to build their own in-house foundational models. Yet, this is unlikely to be a good use of precious startup funds, given the challenges noted above. Most startups are better off leveraging foundational models to grow fast, instead of reinventing the wheel.

From Scripted to Generative

Your company will need to live somewhere along a continuum from a purely scripted solution to a purely generative one. Scripted solutions involve selecting an appropriate response from a dataset of predefined, scripted responses, whereas generative ones involve generating new, unique responses from scratch.
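To illustrate the two ends of that continuum, here is a small sketch contrasting a scripted reply, picked from a fixed set of canned responses, with a generative one composed on the fly by a model. The canned responses and the llm.generate() call are assumptions made for illustration, not any particular product’s design.

# Scripted vs. generative responses, side by side (illustrative only).
from difflib import SequenceMatcher

SCRIPTED_REPLIES = {
    "how do i reset my password": "Go to Settings > Security and choose 'Reset password'.",
    "what are your opening hours": "We are open 9am-5pm, Monday to Friday.",
}

def scripted_reply(user_message: str) -> str:
    """Pick the closest predefined response; the app can never go off-script."""
    best = max(SCRIPTED_REPLIES,
               key=lambda k: SequenceMatcher(None, k, user_message.lower()).ratio())
    return SCRIPTED_REPLIES[best]

def generative_reply(user_message: str, llm) -> str:
    """Compose a brand-new response from scratch; more flexible, but needs guardrails."""
    return llm.generate(f"Reply helpfully to this customer message: {user_message}")

print(scripted_reply("How do I reset my password?"))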

Scripted solutions are safer and constrained, but also less creative and human-like, whereas generative solutions are riskier and unconstrained, but also more creative and human-like. More scripted approaches are necessary for certain use-cases and industries, like medical and educational applications, where there need to be clear guardrails on what the app can do. Yet, when the script reaches its limit, users may lose their engagement and customer retention may suffer. Moreover, it is more challenging to grow a scripted solution because you constrain yourself right from the start, limiting your options down the road.

On the other hand, more generative solutions carry their own challenges. Because AI-based offerings include intelligence, there are more degrees of freedom in how consumers can interact with them, increasing the risks. For example, one married father tragically committed suicide following a conversation with an AI chatbot app, Chai, that encouraged him to sacrifice himself to save the planet. The app leveraged a foundational language model (a bespoke version of GPT-J) from EleutherAI. The founders of Chai have since modified the app so that mentions of suicidal ideation are served with helpful text. Interestingly, one of the founders of Chai, Thomas Rianlan, took the blame, saying: “It wouldn’t be accurate to blame EleutherAI’s model for this tragic story, as all the optimization towards being more emotional, fun and engaging are the result of our efforts.”

It is challenging for managers to anticipate all the ways in which things can go wrong with a highly generative app, given the “black box” nature of the underlying AI. Doing so involves anticipating risky scenarios that may be highly rare. One way of anticipating such cases is to pay human annotators to screen content for potentially harmful categories, such as sex, hate speech, violence, self-harm, and harassment, then use these labels to train models that automatically flag such content. Yet, it is still difficult to come up with an exhaustive taxonomy. Thus, managers who deploy highly generative solutions must be prepared to proactively anticipate the risks, which can be both difficult and expensive. The same applies if you later decide to offer your solution as a service to other companies.
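As a minimal sketch of the annotate-then-automate approach just described, the snippet below trains a toy text classifier on a handful of human-labeled examples and uses it to route new content to human review. The labels, examples, and threshold are illustrative assumptions, not a production moderation pipeline.

# Toy content-flagging model trained on human-annotated examples (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled dataset: 1 = potentially harmful, 0 = benign.
texts = [
    "I want to hurt myself tonight",
    "You are worthless and everyone hates you",
    "What a lovely day for a walk",
    "Can you recommend a good pasta recipe?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def flag_for_review(message: str, threshold: float = 0.5) -> bool:
    """Send a message to human review if the model thinks it may be harmful."""
    return model.predict_proba([message])[0][1] >= threshold

print(flag_for_review("Nobody would miss me if I were gone"))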

Because a fully generative solution is closer to natural, human-like intelligence, it is more attractive from the standpoint of retention and growth, because it is more engaging and can be applied to more new use cases.

• • •

Many entrepreneurs are considering starting companies that leverage the latest generative AI technology, but they must ask themselves whether they have what it takes to compete on increasingly commoditized foundational models, or whether they should instead differentiate on an app that leverages these models.

They must also consider what type of app they want to offer on the continuum from a highly scripted to a highly generative solution, given the different pros and cons accompanying each. Offering a more scripted solution may be safer but limit their retention and growth options, whereas offering a more generative solution is fraught with risk but is more engaging and flexible.

We hope that entrepreneurs will ask these questions before diving into their first generative AI venture, so that they can make informed decisions about what kind of company they want to be, scale fast, and maintain long-term defensibility.

Infusing Digital Responsibility into Your Organization https://smallbiz.com/infusing-digital-responsibility-into-your-organization/ Fri, 28 Apr 2023 12:25:36 +0000 https://smallbiz.com/?p=103647

In 2018, Rick Smith, founder and CEO of Axon, the Scottsdale, Arizona-based manufacturer of Taser weapons and body cameras, became concerned that advances in technology were creating new and challenging ethical issues. So, he set up an independent AI ethics board made up of ethicists, AI experts, public policy specialists, and representatives of law enforcement to provide recommendations to Axon’s management. In 2019, the board recommended against adding facial recognition technology to the company’s line of body cameras, and in 2020, it provided guidelines regarding the use of automated license plate recognition technology. Axon’s management followed both recommendations.

In 2022, the board recommended against a management proposal to produce a drone-mounted Taser designed to address mass shootings. After initially accepting the board’s recommendation, the company changed its mind and, in June 2022, in the wake of the Uvalde school shootings, announced it was launching the Taser drone program anyway. The board’s response was dramatic: Nine of the 13 members resigned, and they released a letter that outlined their concerns. In response, the company announced a freeze on the project.

As societal expectations grow for the responsible use of digital technologies, firms that promote better practices will have a distinct advantage. According to a 2022 study, 58% of consumers, 60% of employees, and 64% of investors make key decisions based on their beliefs and values. Strengthening your organization’s digital responsibility can drive value creation, and brands regarded as more responsible will enjoy higher levels of stakeholder trust and loyalty. These businesses will sell more products and services, find it easier to recruit staff, and enjoy fruitful relationships with shareholders.

However, many organizations struggle to balance the legitimate but competing stakeholder interests. Key tensions arise between business objectives and responsible digital practices. For example, data localization requirements often conflict with the efficiency ambitions of globally distributed value chains. Ethical and responsible checks and balances that need to be introduced during AI/algorithm development tend to slow down development speed, which can be a problem when time-to-market is of utmost importance. Better data and analytics may enhance service personalization, but at the cost of customer privacy. Risks related to transparency and discrimination issues may dissuade organizations from using algorithms that could help drive cost reductions.

If managed effectively, digital responsibility can protect organizations from threats and open them up to new opportunities. Drawing from our ongoing research into digital transformations and in-depth studies of 12 large European firms across the consumer goods, financial services, information and communication technology, and pharmaceutical sectors that are active in digital responsibility, we derived four best practices to maximize business value and minimize resistance.

1. Anchor digital responsibility within your organizational values.

Digital responsibility commitments can be formulated into a charter that outlines key principles and benchmarks that your organization will adhere to. Start with a basic question: How do you define your digital responsibility objectives? The answer can often be found in your organization’s values, which are articulated in your mission statement or CSR commitments.

According to Jakob Woessner, manager of organizational development and digital transformation at cosmetics and personal care company Weleda, “our values framed what we wanted to do in the digital world, where we set our own limits, where we would go or not go.” The company’s core values are fair treatment, sustainability, integrity, and diversity. So when it came to establishing a robotics process automation program, Weleda executives were careful to ensure that it wasn’t associated with job losses, which would have violated the core value of fair treatment.

2. Extend digital responsibility beyond compliance.

While corporate values provide a useful anchor point for digital responsibility principles, relevant regulations on data privacy, IP rights, and AI cannot be overlooked. Forward-thinking organizations are taking steps to go beyond compliance and improve their behavior in areas such as cybersecurity, data protection, and privacy.

For example, UBS Banking Group’s efforts on data protection were kickstarted by GDPR compliance but have since evolved to focus more broadly on data-management practices, AI ethics, and climate-related financial disclosures. “It’s like puzzle blocks. We started with GDPR and then you just start building upon these blocks and the level moves up constantly,” said Christophe Tummers, head of service line data at the bank.

The key, we have found, is to establish a clear link between digital responsibility and value creation. One way this can be achieved is by complementing compliance efforts with a forward-looking risk-management mindset, especially in areas lacking technical implementation standards or where the law is not yet enforced. For example, Deutsche Telekom (DT) developed its own risk classification system for AI-related projects. The use of AI can expose organizations to risks associated with biased data, unsuitable modeling techniques, or inaccurate decision-making. Understanding the risks and building practices to reduce them are important steps in digital responsibility. DT includes these risks in scorecards used to evaluate technology projects.
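As an illustration of what such a risk scorecard might look like, here is a small sketch that scores an AI project against a few weighted risk dimensions and maps the total to a review tier. The dimensions, weights, and thresholds are invented for illustration; they are not Deutsche Telekom’s actual classification system.

# Hypothetical AI-project risk scorecard (illustrative weights and thresholds).
from dataclasses import dataclass

@dataclass
class AIProjectRisk:
    data_bias: int          # 0 (none) to 5 (severe)
    model_opacity: int      # how hard the model is to explain
    decision_impact: int    # harm if the model decides wrongly

WEIGHTS = {"data_bias": 0.4, "model_opacity": 0.25, "decision_impact": 0.35}

def risk_score(project: AIProjectRisk) -> float:
    return (WEIGHTS["data_bias"] * project.data_bias
            + WEIGHTS["model_opacity"] * project.model_opacity
            + WEIGHTS["decision_impact"] * project.decision_impact)

def classify(project: AIProjectRisk) -> str:
    score = risk_score(project)
    if score >= 3.5:
        return "high risk: requires ethics board review"
    if score >= 2.0:
        return "medium risk: document mitigations"
    return "low risk: standard governance"

print(classify(AIProjectRisk(data_bias=4, model_opacity=3, decision_impact=5)))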

Making digital responsibility a shared outcome also helps organizations move beyond compliance. Swiss insurance company Die Mobiliar built an interdisciplinary team consisting of representatives from compliance, business security, data science, and IT architecture.  “We structured our efforts around a common vision where business strategy and personal data work together on proactive value creation,” explains Matthias Brändle, product owner of data science and AI.

3. Set up clear governance.

Getting digital responsibility governance right is not easy. Axon had the right idea when it set up an independent AI ethics board. However, the governance was not properly thought through, so when the company disagreed with the board’s recommendation, it fell into a governance grey area marked by competing interests between the board and management.

Setting up a clear governance structure can minimize such tensions. There is an ongoing debate about whether to create a distinct team for digital responsibility or to weave responsibility throughout the organization.

Pharmaceutical company Merck took the first approach, setting up a digital ethics board to provide guidance on complex matters related to data usage, algorithms, and new digital innovations. It decided to act due to an increasing focus on AI-based approaches in drug discovery and big data applications in human resources and cancer research. The board provides recommendations for action, and any decision going against the board’s recommendation needs to be formally justified and documented.

Global insurance company Swiss Re adopted the second approach, based on the belief that digital responsibility should be part of all of the organization’s activities. “Whenever there is a digital angle, the initiative owner who normally resides in the business is responsible. The business initiative owners are supported by experts in central teams, but the business lines are accountable for its implementation,” explained Lutz Wilhelmy, Swiss Re risk and regulation advisor.

Another option we’ve seen is a hybrid model, consisting of a small team of internal and external experts who guide and support managers within the business lines to operationalize digital responsibility. The benefits of this approach include raised awareness and distributed accountability throughout the organization.

4. Ensure employees understand digital responsibility.

Today’s employees need not only to appreciate the opportunities and risks of working with different types of technology and data, but also to be able to raise the right questions and have constructive discussions with colleagues.

Educating the workforce on digital responsibility was one of the key priorities of the Otto Group, a German e-commerce enterprise. “Lifelong learning is becoming a success factor for each and every individual, but also for the future viability of the company,” explained Petra Scharner-Wolff, member of the executive board for finance, controlling, and human resources. To kickstart its efforts, Otto developed an organization-wide digital education initiative leveraging a central platform that included scores of videos on topics related to digital ethics, responsible data practices, and how to resolve conflicts.

Learning about digital responsibility presents both a short-term challenge of upskilling the workforce and a longer-term challenge of creating a self-directed learning culture that adapts to the evolving nature of technology. As issues related to digital responsibility rarely happen in a vacuum, we recommend embedding aspects of digital responsibility into ongoing ESG skilling programs that also focus on promoting ethical behavior toward a broader set of stakeholders. This type of contextual learning can help employees navigate the complex facets of digital responsibility in a more applied and meaningful way.

Your organization’s needs and resources will determine whether you choose to upskill your entire workforce or rely on a few specialists. A balance of both can be ideal, providing a strong foundation of digital ethics knowledge across the organization while also keeping experts on hand for specialized guidance when needed.

Digital responsibility is fast becoming an imperative for today’s organizations. Success is by no means guaranteed. Yet, by taking a proactive approach, forward-looking organizations can build and maintain responsible practices linked to their use of digital technologies. These practices not only improve digital performance but also support broader organizational objectives.

New Cybersecurity Regulations Are Coming. Here’s How to Prepare. https://smallbiz.com/new-cybersecurity-regulations-are-coming-heres-how-to-prepare/ Mon, 29 Aug 2022 12:05:35 +0000 https://smallbiz.com/?p=74311

Cybersecurity has reached a tipping point. After decades of private-sector organizations more or less being left to deal with cyber incidents on their own, the scale and impact of cyberattacks means that the fallout from these incidents can ripple across societies and borders.

Now, governments feel a need to “do something,” and many are considering new laws and regulations. Yet lawmakers often struggle to regulate technology — they respond to political urgency, and most don’t have a firm grasp on the technology they’re aiming to control. The consequences, impacts, and uncertainties for companies are often not understood until after the rules take effect.

In the United States, a whole suite of new regulations and enforcement are in the offing: the Federal Trade Commission, Food and Drug Administration, Department of Transportation, Department of Energy, and Cybersecurity and Infrastructure Security Agency are all working on new rules. In addition, in 2021 alone, 36 states enacted new cybersecurity legislation. Globally, there are many initiatives such as China and Russia’s data localization requirements, India’s CERT-In incident reporting requirements, and the EU’s GDPR and its incident reporting.

Companies don’t need to just sit by and wait for the rules to be written and then implemented, however. Rather, they need to be working now to understand the kinds of regulations that are presently being considered, ascertain the uncertainties and potential impacts, and prepare to act.

What We Don’t Know About Cyberattacks

To date, most countries’ cybersecurity-related regulations have been focused on privacy rather than cybersecurity, thus most cybersecurity attacks are not required to be reported. If private information is stolen, such as names and credit card numbers, that must be reported to the appropriate authority. But, for instance, when Colonial Pipeline suffered a ransomware attack that caused it to shut down the pipeline that provided fuel to nearly 50% of the U.S. east coast, it wasn’t required to report it because no personal information was stolen. (Of course, it is hard to keep things secret when thousands of gasoline stations can’t get fuel.)

As a result, it’s almost impossible to know how many cyberattacks there really are, and what form they take. Some have suggested that only 25% of cybersecurity incidents are reported, others say only about 18%, others say that 10% or less are reported.

The truth is that we don’t know what we don’t know. This is a terrible situation. As the management guru Peter Drucker famously said: “If you can’t measure it, you can’t manage it.”

What Needs To Be Reported, by Whom, and When?

Governments have decided that this approach is untenable. In the United States, for instance, the White House, Congress, the Securities and Exchange Commission (SEC), and many other agencies and local governments are considering, pursuing, or starting to enforce new rules that would require companies to report cyber incidents — especially critical infrastructure industries, such as energy, health care, communications and financial services. Under these new rules, Colonial Pipeline would be required to report a ransomware attack.

To an extent, these requirements have been inspired by the reporting recommended for “near misses” or “close calls” for aircraft: When aircraft come close to crashing, they’re required to file a report, so that failures that cause such events can be identified and avoided in the future.

On its face, a similar requirement for cybersecurity seems very reasonable. The problem is, what should count as a cybersecurity “incident” is much less clear than the “near miss” of two aircraft being closer than allowed. A cyber “incident” is something that could have led to a cyber breach, but does not need to have become an actual cyber breach: By one official definition, it only requires an action that “imminently jeopardizes” a system or presents an “imminent threat” of violating a law.

This leaves companies navigating a lot of gray area, however. For example, suppose someone tries to log in to your system but is denied because the password is wrong. Is that an “imminent threat”? What about a phishing email? Or someone searching for a known, common vulnerability, such as the log4j vulnerability, in your system? What if an attacker actually got into your system, but was discovered and expelled before any harm had been done?

This ambiguity requires companies and regulators to strike a balance. All companies are safer when there’s more information about what attackers are trying to do, but that requires companies to report meaningful incidents in a timely manner. For example, based on data gathered from current incident reports, we learned that just 288 out of the nearly 200,000 known vulnerabilities in the National Vulnerability Database (NVD) are actively being exploited in ransomware attacks. Knowing this allows companies to prioritize addressing these vulnerabilities.
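As a sketch of how that kind of prioritization might be automated, the snippet below sorts vulnerabilities found in a scan so that those on an actively exploited list come first. The CVE identifiers and scan results are placeholders, not real advisories or a real threat feed.

# Prioritize remediation by cross-referencing scan findings with actively exploited CVEs.
# All identifiers below are placeholders for illustration.

ACTIVELY_EXPLOITED = {"CVE-0000-1111", "CVE-0000-2222"}  # e.g., from a curated threat feed

scan_findings = [
    {"host": "web-01", "cve": "CVE-0000-1111", "cvss": 9.8},
    {"host": "db-02", "cve": "CVE-0000-3333", "cvss": 7.5},
    {"host": "app-03", "cve": "CVE-0000-2222", "cvss": 8.1},
]

def prioritize(findings):
    """Put findings for actively exploited CVEs first, then sort by CVSS severity."""
    return sorted(
        findings,
        key=lambda f: (f["cve"] not in ACTIVELY_EXPLOITED, -f["cvss"]),
    )

for finding in prioritize(scan_findings):
    urgent = finding["cve"] in ACTIVELY_EXPLOITED
    print(f"{finding['host']}: {finding['cve']} (urgent={urgent})")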

On the other hand, using an overly broad definition might mean that a typical large company might be required to report thousands of incidents per day, even if most were spam emails that were ignored or repelled. This would be an enormous burden both on the company to produce these reports as well as the agency that would need to process and make sense out of such a deluge of reports.

International companies will also need to navigate the different reporting standards in the European Union, Australia, and elsewhere, including how quickly a report must be filed — whether that’s six hours in India, 72 hours in the EU under GDPR, or four business days in the United States, with many further variations within each country, since a flood of regulations is coming out of diverse agencies.
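One simple way for a compliance team to keep track of those divergent clocks is a lookup that maps each regime to its reporting window, as sketched below. The entries reflect only the deadlines mentioned in this article, and business days are simplified to calendar days; a real table would cover many more regimes and the precise triggering conditions for each.

# Incident-reporting windows keyed by regime (only the examples cited above; not exhaustive).
from datetime import datetime, timedelta

REPORTING_WINDOWS = {
    "India (CERT-In)": timedelta(hours=6),
    "EU (GDPR)": timedelta(hours=72),
    "US (SEC, proposed)": timedelta(days=4),  # four business days, simplified to calendar days
}

def reporting_deadline(regime: str, detected_at: datetime) -> datetime:
    """Return the latest time by which the incident must be reported under a given regime."""
    return detected_at + REPORTING_WINDOWS[regime]

detected = datetime(2023, 6, 1, 9, 30)
for regime in REPORTING_WINDOWS:
    print(regime, "->", reporting_deadline(regime, detected))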

What Companies Can Do Now

Make sure your procedures are up to the task.

Companies subject to SEC regulations, which includes most large companies in the United States, need to quickly define “materiality” and review their current policies and procedures for determining whether “materiality” applies, in light of these new regulations. They’ll likely need to revise them to streamline their operation — especially if such decisions must be done frequently and quickly.

Keep ransomware policies up to date.

Regulations are also being formulated in areas such as reporting ransomware attacks and even making it a crime to pay a ransom. Company policies regarding paying ransomware need to be reviewed, along with likely changes to cyberinsurance policies.

Prepare for required “Software Bill of Materials” in order to better vet your digital supply chain.

Many companies did not know that they had the log4j vulnerability in their systems because that software was often bundled with other software that was bundled with other software. There are regulations being proposed to require companies to maintain a detailed and up-to-date Software Bill of Materials (SBOM) so that they can quickly and accurately know all the different pieces of software embedded in their complex computer systems.

Although an SBOM is useful for other purposes too, it may require significant changes to the ways that software is developed and acquired in your company. The impact of these changes needs to be reviewed by management.
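To make the idea concrete, here is a small sketch that walks a CycloneDX-style SBOM (represented as a Python dictionary) and flags any components whose name and version appear on a watch list, such as vulnerable log4j releases. The exact SBOM fields and the version list are simplified assumptions rather than a complete implementation of the specification.

# Scan a simplified CycloneDX-style SBOM for components on a vulnerability watch list.

sbom = {
    "bomFormat": "CycloneDX",
    "components": [
        {"type": "library", "name": "log4j-core", "version": "2.14.1"},
        {"type": "library", "name": "jackson-databind", "version": "2.15.0"},
    ],
}

# Hypothetical watch list: (component name, affected version) pairs.
WATCH_LIST = {("log4j-core", "2.14.1"), ("log4j-core", "2.15.0")}

def flag_vulnerable_components(bom: dict) -> list[dict]:
    """Return SBOM components that match the watch list."""
    return [
        component
        for component in bom.get("components", [])
        if (component["name"], component["version"]) in WATCH_LIST
    ]

for hit in flag_vulnerable_components(sbom):
    print(f"Flagged: {hit['name']} {hit['version']}")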

What More Should You Do?

Someone, or more likely a group, in your company should review these new or proposed regulations and evaluate what impact they will have on your organization. These are rarely just technical details left to your information technology or cybersecurity team — they have companywide implications and will likely require changes to many policies and procedures throughout your organization. To the extent that most of these new regulations are still malleable, your organization may want to actively influence what directions these regulations take and how they are implemented and enforced.

Acknowledgement: This research was supported, in part, by funds from the members of the Cybersecurity at MIT Sloan (CAMS) consortium.
