Artificial intelligence (AI) and generative AI technologies have ushered in a new era of possibilities, redefining how software operates. These advancements offer unprecedented opportunities for increased productivity, innovative problem-solving, and generating relevant information at scale. However, as the adoption of generative AI widens, so do concerns about data privacy and ethics.

In the dynamic landscape of AI, complying with evolving regulations and safeguarding data privacy are paramount challenges. AI can augment human capabilities, but it should not replace human oversight, particularly while AI regulations are still taking shape on a global scale.

One of the inherent risks of unchecked generative AI lies in the inadvertent disclosure of proprietary information. Companies that feed sensitive proprietary data into public AI models risk exposing their valuable assets. To mitigate this risk, some companies opt to localize AI models on their systems and train them using their proprietary data. However, such an approach demands a well-organized data infrastructure for optimal results.
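
In practice, localization can be as simple as serving an open-weights model from company-controlled hardware. The sketch below is illustrative rather than a prescribed setup: it assumes the Hugging Face transformers library and a hypothetical local model directory.

```python
# A minimal sketch of local inference: prompts containing proprietary
# data never leave company-controlled infrastructure.
# Assumes the Hugging Face `transformers` library and a model already
# downloaded to a local directory (the path below is hypothetical).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_DIR = "/opt/models/llama-3-8b-instruct"  # hypothetical local path

tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR, local_files_only=True)
model = AutoModelForCausalLM.from_pretrained(MODEL_DIR, local_files_only=True)

# The proprietary prompt stays in-house; nothing is sent to a public API.
prompt = "Summarize the attached supplier contract terms:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```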

IP protections and copyright concerns

Companies using AI-generated content may inadvertently infringe on the intellectual property rights of third parties, exposing themselves to legal disputes and liability. To address this issue, some vendors, like Adobe with Adobe Firefly, offer indemnification for content generated by their generative AI models. Nonetheless, copyright-related challenges will need careful resolution as AI systems continue to “reuse” third-party intellectual property.

Personal data security

AI systems must handle personal data, particularly sensitive or special category personal data, with the utmost care. The increasing integration of marketing and customer data into LLMs raises the risk of inadvertent data leaks and, with them, data privacy breaches.

Contractual violations

Using customer data in AI applications can sometimes violate contractual agreements, leading to legal consequences. As businesses embrace AI, they must navigate this complex terrain carefully, ensuring they comply with contractual obligations.

Customer transparency and disclosure

Current and potential future regulations often focus on transparency and proper disclosure of AI technology. For instance, when a customer interacts with a chatbot on a support website, the company must indicate whether an AI or a human is handling the interaction. Compliance with such regulations is critical to maintaining trust and adhering to ethical standards.
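
Disclosure can be built directly into the conversation flow. The sketch below is a hypothetical wrapper, not any product's actual API: generate_reply stands in for whatever model or service produces the answer.

```python
# A minimal sketch of an AI-disclosure wrapper for a support chatbot.
# `generate_reply` is a hypothetical stand-in for the model or service
# that actually produces answers.
from typing import Callable

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant. "
    "Type 'agent' at any time to reach a human."
)

def start_session() -> str:
    # Disclose AI involvement before the first exchange.
    return AI_DISCLOSURE

def handle_message(user_message: str,
                   generate_reply: Callable[[str], str]) -> str:
    # Honor the escape hatch to a human agent.
    if user_message.strip().lower() == "agent":
        return "Transferring you to a human agent."
    return generate_reply(user_message)

# Example usage with a trivial stand-in model:
print(start_session())
print(handle_message("Where is my order?", lambda m: f"(AI) Checking: {m}"))
```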

AI’s rapid evolution has outpaced the development of legal frameworks. Companies cannot afford to wait for absolute clarity in the legal landscape as competitors forge ahead. Instead, they must implement time-tested risk reduction strategies based on current regulations and legal precedents to minimize potential issues.

Legal challenges and AI giants

Recent lawsuits targeting AI giants underscore the importance of responsible data handling. These lawsuits, including class action cases involving copyright infringement, consumer protection, and data protection violations, emphasize the need for rigorous data governance and transparency. They also hint at potential requirements for disclosing AI training data sources.

Implications for businesses

Businesses, not just AI creators like OpenAI, face significant risks when relying heavily on AI models. Illegally trained AI models can contaminate entire products, as demonstrated by the photo app Ever. When the Federal Trade Commission (FTC) charged its maker, Everalbum, with deceiving consumers about its use of facial recognition technology and its retention of users’ photos, the company had to delete the improperly collected data and the models trained on it; the Ever app shut down in 2020.

Global regulations and the EU AI Act

States like New York are introducing laws and proposals to regulate AI use in areas such as hiring and chatbot disclosure. The EU AI Act, currently in trilogue negotiations, is expected to pass soon. It would require companies to transparently disclose AI-generated content, ensure content legality, publish summaries of copyrighted data used for training, and meet additional requirements for high-risk use cases.

Best practices for mitigating AI risks

Despite the evolving legal landscape, CEOs are under pressure to embrace generative AI tools to boost productivity. Companies can use existing laws and frameworks as a guide to establish best practices and prepare for future regulations. Existing data protection laws already contain provisions applicable to AI systems, including requirements for transparency, notice, and protection of personal privacy rights. Best practices include:

Transparency and documentation: Clearly communicate AI usage, and document the AI’s logic, intended applications, and potential impacts on data subjects.

Localizing AI models: Internal localization and training with proprietary data can reduce data security risks and improve productivity by ensuring models are trained on relevant, organization-specific information.

Starting small and experimenting: Experiment with internal AI models first, then transition them to live business data in secure cloud or on-premises environments.

Discovering and connecting: Harness generative AI to uncover new insights and forge unexpected connections across departments and information silos.

Preserving the human element: Generative AI should augment human performance, not replace it entirely. Human oversight, critical decision review, and verification of AI-created content are vital to mitigate model biases and data inaccuracies.

Maintaining transparency and logs: Capture data transactions and detailed logs of personal data processing to demonstrate proper governance and data security if needed.
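
As a concrete illustration of the last point, the sketch below logs each personal-data processing event as a structured record. The field names and purpose values are illustrative assumptions, not a prescribed schema.

```python
# A minimal sketch of structured audit logging for personal-data
# processing, so an organization can later demonstrate what data an AI
# pipeline touched and on what basis.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_logger = logging.getLogger("ai.audit")

def log_personal_data_event(subject_id: str, data_categories: list,
                            purpose: str, legal_basis: str) -> None:
    """Record one personal-data processing event as a structured log line."""
    audit_logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "subject_id": subject_id,  # pseudonymous identifier, not raw PII
        "data_categories": data_categories,
        "purpose": purpose,
        "legal_basis": legal_basis,
    }))

# Example: a support transcript is summarized by an internal model.
log_personal_data_event(
    subject_id="cust-48d1",
    data_categories=["contact_details", "support_transcript"],
    purpose="llm_summarization",
    legal_basis="legitimate_interest",
)
```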

Embracing the potential of AI

With AI technologies like Claude, ChatGPT, Bard, and Llama, businesses have the opportunity to capitalize on the wealth of data collected over the years. While change brings risks, lawyers and privacy professionals must prepare for this transformative wave. Robust data governance, clear notification practices, and detailed documentation are essential to navigate the evolving legal landscape and harness the vast potential of AI while safeguarding privacy and compliance.