Generative AI, like other types of AI, has the potential to affect workforces, energy consumption, data privacy, security, and politics. GenAI technology may also introduce new business risks, including plagiarism, copyright violations, harmful content, misinformation, and hallucinations. Businesses may also need to contend with worker displacement and a lack of transparency.
According to Tad Roselund, managing director and senior partner at consulting firm BCG, many of the risks posed by generative AI are heightened and more concerning than those associated with other types of AI. Mitigating those risks requires a thorough strategy, sound governance, and a commitment to responsible AI.
Businesses adopting GenAI should take the following 11 issues into account:
- The dissemination of damaging content
When humans provide text prompts, generative AI systems can automatically produce content. According to Bret Greenstein, generative AI leader and partner at professional services firm PwC, “these systems can generate enormous productivity improvements, but they can also be used for harm, either intentional or unintentional.” For instance, an AI-generated email sent on the company’s behalf might unintentionally use offensive language or give employees bad advice. Greenstein suggested that GenAI should be used to supplement people and procedures rather than replace them, so that content meets the company’s ethical standards and upholds its brand values; one practical guardrail is routing generated drafts through a review step, as sketched below.
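A minimal sketch of such a guardrail, assuming a keyword screen stands in for a real moderation model and the flagged terms are purely illustrative:

```python
import re

# Illustrative keyword screen; a real deployment would use a trained
# moderation model plus the company's own style and ethics guidelines.
FLAGGED = re.compile(r"\b(guarantee|lawsuit|terminated|stupid)\b", re.IGNORECASE)

def needs_human_review(draft: str) -> bool:
    """Route a generated draft to a person if it trips the screen."""
    return bool(FLAGGED.search(draft))

if __name__ == "__main__":
    draft = "We guarantee this plan cannot fail."
    if needs_human_review(draft):
        print("Hold for human review before sending.")
    else:
        print("Cleared automated screen; still log for audit.")
```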
- Legal exposure and copyright
Large image and text databases from various sources, including the internet, are used to train well-known generative AI tools. The source of the data may be unknown when these tools produce images or lines of code, which could be problematic for a pharmaceutical company that relies on a formula for a complex molecule in a drug or a bank that handles financial transactions. If a company’s product is based on the intellectual property of another company, there could be significant financial and reputational risks. “Companies must look to validate outputs from the models,” Roselund stated, “until legal precedents provide clarity around IP and copyright challenges.”
- Data privacy
Large language models (LLMs) in generative AI are trained on datasets that may contain personally identifiable information (PII) about specific people. Sometimes a straightforward text prompt can elicit this information.
Additionally, customers may find it more difficult to find and request the removal of the information than they would with traditional search engines. To comply with privacy regulations, businesses that develop or modify LLMs must make sure that PII is not incorporated into the language models and that removing PII from them is simple.
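A minimal sketch of the preprocessing this implies, assuming simple regex patterns stand in for a real PII-detection pipeline (production systems would use dedicated entity recognition and audited removal workflows):

```python
import re

# Very rough PII patterns for illustration only.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def scrub_pii(text: str) -> str:
    """Replace obvious PII with placeholder tokens before training."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    record = "Contact Jane at jane.doe@example.com or 555-123-4567."
    print(scrub_pii(record))  # Contact Jane at [EMAIL] or [PHONE].
```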
- Disclosure of sensitive information
GenAI is democratising and increasing the accessibility of AI capabilities. According to Roselund, the confluence of accessibility and democratisation may result in a consumer brand unintentionally revealing its product strategy to a third party or a medical researcher sharing private patient data. Unintentional events like these could have serious legal repercussions and permanently damage patient or customer trust. Roselund advised businesses to establish top-down, transparent policies, governance, and efficient communication, highlighting shared accountability for protecting IP, protected data, and sensitive information.
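One concrete control that supports such policies is an outbound gate that screens prompts before they leave the organisation for a third-party GenAI service. The term list and blocking policy below are hypothetical, a sketch rather than a complete data-loss-prevention system:

```python
# Hypothetical list of internal code names and protected identifiers.
SENSITIVE_TERMS = {"project-falcon", "q3-roadmap", "patient-id"}

def check_outbound_prompt(prompt: str) -> str:
    """Block a prompt bound for an external GenAI API if it contains sensitive terms."""
    lowered = prompt.lower()
    hits = [term for term in SENSITIVE_TERMS if term in lowered]
    if hits:
        raise ValueError(f"Prompt blocked; contains sensitive terms: {hits}")
    return prompt

if __name__ == "__main__":
    try:
        check_outbound_prompt("Summarise the Q3-Roadmap slide for the board.")
    except ValueError as err:
        print(err)
```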
- Amplification of preexisting prejudice
Existing bias may be exacerbated by generative AI. For instance, businesses that use LLMs for particular applications may not have control over bias in the data used to train these language models. According to Greenstein, AI companies must have a diverse leadership team and subject matter experts to help them spot bias in data and models.
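Even without control over a vendor's training data, teams can run simple spot-checks on the data and outputs they do control. The sketch below assumes labelled outcomes are available and compares positive-outcome rates across groups; a real bias audit would go much further:

```python
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group, outcome) pairs with outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

if __name__ == "__main__":
    data = [("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)]
    # A large gap between groups is a signal worth investigating.
    print(positive_rate_by_group(data))
```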
- Roles and morale of employees
According to Greenstein, AI is being trained to perform more of the routine duties of knowledge workers, such as writing, coding, content creation, summarising, and analysing. Although these processes have been underway since the first AI and automation tools were implemented, the pace of worker displacement and replacement has accelerated with advances in generative AI. “The future of work itself is changing,” Greenstein stated, “and the most ethical companies are investing in this.”
Investing in training specific segments of the workforce for the new roles generated by generative AI applications has been one ethical response. For instance, companies will need to assist staff in learning generative AI techniques like prompt engineering. According to Nick Kramer, vice president of applied solutions at consulting firm SSA & Company, “the truly existential ethical challenge for adoption of generative AI is its impact on organisational design, work, and ultimately on individual workers.” “This will not only minimise the negative impacts, but it will also prepare the companies for growth.”
- Provenance of data
GenAI systems consume massive amounts of data that may be poorly managed, of dubious origin, used without consent, or biased. The AI systems themselves, or social influencers, may compound the inaccuracy further.
“The corpus of data a generative AI system uses and its provenance determines its accuracy,” said Scott Zoldi, chief analytics officer at FICO, a company that provides credit scoring services. “ChatGPT-4 is mining the internet for data, and a lot of it is truly garbage, presenting a basic accuracy problem on answers to questions to which we don’t know the answer.” Zoldi said FICO has been using generative AI for more than a decade to simulate edge cases for training fraud detection algorithms. Because the generated data is consistently marked as synthetic, Zoldi’s team knows exactly how it may be used. “We treat it as walled-off data for test and simulation only,” he stated.
“In the future, the model is not informed by synthetic data generated by generative AI. We do not permit this generative asset to be ‘out in the wild.’”
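The practice Zoldi describes, marking generated records so they can never leak into training, can be illustrated with a small tagging convention. The field names and allowed-use policy below are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class Record:
    payload: dict
    synthetic: bool = False                       # set at generation time, never dropped
    allowed_uses: tuple = ("test", "simulation")

def assert_allowed(record: Record, use: str) -> None:
    """Refuse to hand synthetic data to any job outside its walled-off uses."""
    if record.synthetic and use not in record.allowed_uses:
        raise PermissionError(f"Synthetic data may not be used for '{use}'")

if __name__ == "__main__":
    fake_txn = Record(payload={"amount": 999_999}, synthetic=True)
    assert_allowed(fake_txn, "simulation")        # fine: walled-off test use
    try:
        assert_allowed(fake_txn, "training")      # blocked: never feeds the model
    except PermissionError as err:
        print(err)
```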
- Inability to interpret and explain
According to Zoldi, many generative AI systems group facts together probabilistically, reflecting how the AI has learnt to link data elements. But in applications such as ChatGPT, those underlying linkages and sources aren’t always visible, which calls the reliability of the output into question.
Analysts hope to arrive at a causal explanation for the results when they examine GenAI. However, generative AI and machine learning models look for correlations rather than causality. “That’s where we humans need to insist on model interpretability — the reason why the model gave the answer it did,” Zoldi stated. “And truly understand if an answer is a plausible explanation versus taking the outcome at face value.”
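Generative models are far harder to introspect than traditional scoring models, but the habit Zoldi describes, asking why a model gave the answer it did, can be illustrated on a simple stand-in classifier. The tiny dataset and feature names below are invented for the example:

```python
from sklearn.linear_model import LogisticRegression

# Features: [prior_default, high_utilisation]; label: application declined.
X = [[0, 1], [1, 1], [1, 0], [0, 0]]
y = [1, 1, 0, 0]

model = LogisticRegression().fit(X, y)
for name, weight in zip(["prior_default", "high_utilisation"], model.coef_[0]):
    # Ask whether these drivers are plausible rather than taking the score at face value.
    print(f"{name}: {weight:+.2f}")
```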
GenAI systems shouldn’t be trusted to deliver answers that have the potential to drastically impact lives and livelihoods until that degree of reliability can be attained.
- Hallucinations caused by AI
Generative AI approaches employ various combinations of algorithms, such as autoregressive models, autoencoders, and other machine learning techniques, to extract patterns and produce content. Although these models excel at spotting novel patterns, they can struggle to capture crucial distinctions that matter in real-world use cases.
Typical failures include authoritative-sounding but erroneous prose, or realistic-looking images in which human figures are distorted with extra fingers or eyes. With language models, these mistakes can show up as chatbots that misrepresent company policies: an Air Canada chatbot, for example, misstated the airline’s bereavement fare policy, and attorneys have been fined for filing briefs that cited fictitious court cases produced by these tools.
Newer techniques, such as retrieval-augmented generation (RAG) and agentic AI frameworks, can lessen these problems. Keeping humans in the loop to verify the accuracy of generative AI output remains crucial to avoid customer backlash, penalties, or other issues; a minimal sketch of the idea follows.
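The sketch below pairs naive keyword retrieval with a human-escalation rule. The `call_llm` helper, the policy snippets, and the escalation logic are hypothetical placeholders, not any vendor’s API:

```python
# Toy knowledge base of policy text; real systems would use vector search.
POLICY_DOCS = {
    "bereavement": "Bereavement fares must be requested before travel.",
    "baggage": "Two checked bags are included on international flights.",
}

def retrieve(question: str) -> str:
    hits = [text for key, text in POLICY_DOCS.items() if key in question.lower()]
    return "\n".join(hits) or "NO POLICY FOUND"

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever model API is in use."""
    return f"[model answer grounded in]\n{prompt}"

def answer_with_review(question: str) -> str:
    context = retrieve(question)
    if "NO POLICY FOUND" in context:
        return "Escalate to a human agent; no grounding document found."
    # Answers are grounded in retrieved policy text and logged for periodic human audit.
    return call_llm(f"Answer using only this policy text:\n{context}\n\nQ: {question}")

if __name__ == "__main__":
    print(answer_with_review("Can I get a bereavement fare after my trip?"))
```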
- The carbon footprint
Many AI vendors argue that larger AI models produce better results. That is partially accurate, but training new models or running inference in production can frequently demand significantly more data centre resources. The trade-off isn’t straightforward: some contend that an improved AI model that lowers a worker’s commuting footprint or makes a product more efficient could be a net benefit, yet building that model might worsen environmental problems such as global warming.
- The influence on politics
The political effects of GenAI technologies are a delicate topic. On the one hand, improved GenAI tools might make the world a better place. On the other, they could enable political actors of all kinds, from authoritarians to politicians to voters, to harm communities. One example of how generative AI is already hurting politics: social media platforms whose algorithms generate or promote divisive comments to boost engagement for their owners, at the expense of comments that find common ground but don’t draw the same click-throughs and shares.
As societies determine which GenAI use cases serve the public good and whether that should be the ultimate goal, these problems will remain complex for years to come.