
10 Threats That The Use Of AI Poses For Companies And Organizations

News this week that Microsoft will include the AI capabilities of Bing on the taskbar of its Windows 11 operating system shows how quickly the emerging technology is becoming available to businesses and individuals alike.

“With the update, the AI tool will be accessible from the Windows search box, which allows users to directly access files and settings and perform web queries,” CNN reported.

But how, when, and why business leaders employ AI technology has the potential to create a wide range of crises for their companies and organizations.

Elon Musk, for one, appears to have his doubts—if not concerns—about AI. Speaking yesterday to investors about Tesla’s plans, he said, “AI stresses me out,” and that the federal government should regulate the technology.

“We need some kind of, like, regulatory authority or something overseeing AI development,” Musk said. “Make sure it’s operating in the public interest. It’s quite dangerous technology. I fear I may have done some things to accelerate it.”

“As AI advances, companies will face a number of legal and business risks, first, from bad actors leveraging these technologies to harm businesses and, second, when businesses themselves wish to implement chatbots or other forms of AI into their functions,” Matthew F. Ferraro, a cybersecurity attorney with the law firm Wilmer Hale, said via email.

“This is a quickly evolving technology, but based on what we know so far, there are ten legal and business risks entities of all kinds should be aware of,” he warned.

Ferraro said business leaders should keep the following major risks of using AI in mind in the days ahead.

Confidentiality

“Information entered into AI, including the prompts that ask the chatbots questions, should be considered public, not private. This is because chatbots may use the data they ingest to develop and improve their services.

“Businesses should be wary of entering into chatbot prompts information from clients, customers or partners that is subject to contractual confidentiality limitations or other controls.”

Phishing And Malware

“Chatbots can create malware and impersonate human-like conversations that can be used for phishing and social media scams.”

Data Privacy

“AI tools can collect personal data, which may be shared with third parties, potentially violating data privacy laws.”

Deceptive Trade Practices

“The use of AI, including chatbots, to mislead consumers may violate laws prohibiting unfair and deceptive trade practices.”

Discrimination

“The outputs of AI can be biased against certain groups because of the biased nature of the data on which AI tools are trained. U.S. regulators, including the White House and the Federal Trade Commission, emphasize using AI responsibly and in a nondiscriminatory manner.”

Disinformation

“Chatbots can create false information at scale in believable, conversational English, even if the user is not as fluent.”

Ethical Obligations

“Companies should ensure their use of AI aligns with ethical obligations, particularly those imposed by professional organizations. For instance, the use of AI to represent clients may constitute the unauthorized practice of law.”

Government Contracts

“Government contracts require candor on the part of contractors. Contractors should be transparent about the use of AI to avoid misleading the government.”

Intellectual Property

“The use of generative AI poses many different kinds of intellectual property risks. The material fed into the AI may be protected by rightsholders. And it is generally unsettled who owns the IP generated by AI. Litigation in this area is ongoing.”

Validation

“Chatbots may produce false but authoritative-sounding statements. Businesses using chatbots should validate the responses before relying on them for business decisions.”
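Validating chatbot output before relying on it, as Ferraro advises, can start with something as simple as an automated gate. The sketch below is a minimal illustration, not a method from the article: it flags any sentence in a response that asserts a figure not found in an internal store of vetted facts (the `known_facts` dictionary and the sample figures are hypothetical stand-ins for whatever system of record a business actually uses).

```python
import re

def validate_response(response: str, known_facts: dict[str, str]) -> list[str]:
    """Return sentences from a chatbot response that contain figures
    not matching any vetted fact, so a human can review them."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", response.strip()):
        numbers = re.findall(r"\d[\d,.%]*", sentence)
        # Any sentence containing a figure must exactly match a vetted fact.
        if numbers and sentence not in known_facts.values():
            flagged.append(sentence)
    return flagged

facts = {"q4": "Revenue grew 12% in Q4."}
answer = "Revenue grew 12% in Q4. Headcount doubled to 900."
print(validate_response(answer, facts))  # → ['Headcount doubled to 900.']
```

A real deployment would check claims against databases or documents rather than exact strings, but the principle is the same: nothing numeric leaves the chatbot unreviewed.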

Before fully embracing AI and exposing their organizations to these and other crisis triggers, executives should move forward with the technology carefully and cautiously.

Taking Precautionary Steps

“Businesses can prepare for AI by taking precautionary steps,” Charlotte Adams, chief people officer at Overnight Jobs, a job-searching and career advice website, said in a statement.

“First, they need to identify the areas of their business where AI can be most effective and beneficial and develop a clear strategy for integrating AI into those areas,” she advised.

“They also need to ensure that their data infrastructure is capable of supporting AI initiatives by investing in technologies like cloud computing, data analytics, and machine learning tools. They should start small by implementing AI in a few areas of the business and gradually expanding its use over time,” Adams counseled.

“They need to ensure that their employees are trained to work alongside AI technologies and understand how AI can be used to complement their work. Finally, they need to be aware of the ethical implications of AI and develop policies and guidelines that ensure the responsible use of AI in their business operations.

“By taking these steps, businesses can prepare for the integration of AI and reap the benefits of increased efficiency, productivity, and innovation,” she concluded.

Ferraro said there are steps that companies can take to help minimize the dangers and threats of using AI.

“Be wary of the pell-mell use of AI by business employees. Put policies in place to protect the confidentiality of contractually protected information,” he recommended. “Comply with data privacy regulations, offer options that allow users to have their information deleted or protected from being shared with third parties, [and] be transparent with customers, employees, and partners about the use of AI or chatbots.”
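One concrete way to enforce the kind of confidentiality policy Ferraro describes is to screen prompts before they reach a third-party chatbot. The following is a minimal sketch, not anything from the article; the blocked patterns and the `CLIENT-` internal ID format are hypothetical examples, and a real policy would be tailored to a company's own contracts and data classifications.

```python
import re

# Hypothetical patterns a confidentiality policy might redact before a
# prompt is sent to an external AI service.
BLOCKED_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "client_code": re.compile(r"\bCLIENT-\d+\b"),  # assumed internal ID format
}

def redact_prompt(prompt: str) -> str:
    """Replace policy-flagged substrings with labeled placeholders."""
    for label, pattern in BLOCKED_PATTERNS.items():
        prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt

print(redact_prompt("Summarize the dispute for CLIENT-4821 (contact: ana@example.com)."))
# → Summarize the dispute for [CLIENT_CODE REDACTED] (contact: [EMAIL REDACTED]).
```

Redacting at the boundary means employees can still use the tool while the organization retains control over what it ingests.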

Ferraro said corporate executives should “plan for disinformation dangers proactively, communicate company messages, monitor corporate brands online, and be prepared to respond in the case of an incident.”