Navigating the Risks of Generative AI: Regulation, Shadow AI, and Legal Challenges

The same improved multimodal capabilities and lowered barriers to entry that democratize access also create new opportunities for abuse: as it becomes easier to view, generate and act on media, it also becomes easier for bad actors to produce deepfakes, compromise privacy, perpetuate or exploit bias, and evade CAPTCHA protections. In January 2024, social media was flooded with explicit deepfake videos of celebrities; research from May of the previous year had already shown that ten times as many voice-based deepfakes were circulating online as in the same month of 2022. [6]

This could act as a headwind to adoption, or at least slow its acceleration in the short to medium term. Any large, irreversible investment in an emerging technology or practice is inherently risky: new legislation and shifting political winds could require significant re-tooling, or even render certain uses illegal, in the coming years.

In the EU, the Artificial Intelligence Act was provisionally agreed by member states in December 2023. Among other measures, the Act bans the untargeted scraping of facial images to build facial recognition databases, biometric categorization systems that infer sensitive attributes such as race, gender or sexual orientation, AI-powered “social scoring,” and uses of AI that could cause serious harm purely through algorithmic decisions. It also aims to establish a category of “high-risk” AI systems, such as those with the potential to endanger safety, fundamental rights or the rule of law (for example, by removing human control), which are subject to additional oversight requirements. The Act also lays out transparency obligations for what it calls “general-purpose AI (GPAI)” systems, that is, foundation models, including requirements for technical documentation and systematic adversarial testing.

Conversely, while several key players, such as Mistral, are based in the EU, the vast majority of groundbreaking AI work is being done in the United States, where legally binding regulation of AI in the private sector would require action from Congress, which may be unlikely in an election year. On October 30, the Biden administration issued a sweeping executive order detailing 150 requirements for federal agencies' use of AI technologies; several months earlier, the administration had secured voluntary commitments from prominent AI labs to adhere to certain trust and safety guardrails. Both California and Colorado are actively pursuing their own legislation regarding individuals' data privacy rights with respect to artificial intelligence. China has taken more aggressive steps to impose specific restraints on AI, including prohibiting social media recommendation algorithms from engaging in price discrimination and mandating that AI-generated images be clearly labelled. Forthcoming rules on generative AI would require that LLM training data and model output be verified, which some experts interpret as a step toward censoring LLM output. Only time will tell. In the meantime, businesses, investors and creators continue to debate the nature of generative AI's creative capabilities and the role of creators in LLM development, and the use of copyrighted material as training data for content-generating AI models remains a hot-button issue. The outcome of the New York Times' lawsuit against OpenAI will significantly shape the future of AI legislation, but it is unlikely to be the last round in what many expect to be a protracted legal battle.

Shadow AI (and corporate AI policies)

Compounding this increasingly treacherous landscape, the broad availability and ease of use of generative AI tools multiplies an organisation's potential exposure to legal, regulatory, economic and reputational consequences. Organisations need not only a considered, consistent and clearly articulated corporate policy on generative AI use that everyone can understand, but also, just as importantly, a watchful eye on shadow AI: the personal, unauthorised use of AI tools in the workplace.

Shadow AI (also called “shadow IT” or “BYOAI”) arises when impatient employees who want answers right now, likely faster than a careful company policy can deliver them, adopt generative AI tools in the workplace without approval or oversight from those in charge. Many consumer-facing generative AI services, some free of charge, allow even non-technical individuals to improvise with these tools. In one Ernst & Young study, 90 percent of respondents reported that they now use AI on the job. [7]

That enterprising spirit can be good in itself, but eager employees may lack relevant information or perspective on security, privacy and compliance, and this can expose organisations to considerable risk. For instance, an employee might unknowingly feed trade secrets to a public-facing AI model that continually trains on user input, or use copyrighted material to train a proprietary content-generation model, exposing their company to lawsuits.
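As a purely illustrative sketch of one possible guardrail (the function names and patterns below are hypothetical, not drawn from any specific product), an organisation could screen outbound prompts for obviously sensitive material before they ever reach a third-party generative AI service:

```python
import re

# Hypothetical examples of patterns an organisation might treat as sensitive.
# A real deployment would rely on vetted DLP tooling, not a hand-rolled list.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|AKIA)[A-Za-z0-9_-]{16,}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "internal_label": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.IGNORECASE),
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons): block prompts matching any sensitive pattern."""
    reasons = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(prompt)]
    return (len(reasons) == 0, reasons)

if __name__ == "__main__":
    allowed, reasons = screen_prompt("Summarize this CONFIDENTIAL roadmap: ...")
    if allowed:
        print("Prompt may be sent to the external service.")
    else:
        print(f"Prompt blocked before leaving the network: {reasons}")
```

A check like this cannot catch every leak, but it illustrates how policy can be enforced in tooling rather than left to individual judgment.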

As with most works in progress, this is a reminder that the dangers of generative AI scale almost linearly with its capabilities: with great power comes great responsibility.
