With regulation in its early stages, few hard-and-fast rules exist around how companies develop and deploy artificial intelligence, machine learning and large language models. Cybersecurity expert Scott Allendevaux calls for the public and legislators to press for robust data privacy protections.
Whistleblowers from within the cloistered artificial intelligence community are sounding alarms. One group, for example, has raised legitimate concerns about a culture of recklessness and secrecy at OpenAI, a leader in the field.
The threat is so great, in their estimation, that those speaking out are willing to sacrifice the monetary and stock awards they forfeit by defying their non-disparagement clauses. These individuals are owed a debt of gratitude for raising serious transparency concerns about the companies that will reshape the future, and possibly not for the better if those companies lack strong ethical backbones and a dedication to standards that ensure things meant to be private remain private.
It is incumbent on everyone in the industry, from the C-suite down to the junior staff, to be on constant watch for dirty deeds and to not be afraid to speak out, much in the way this fearless cohort has.
As a data security expert, I find these admonitions only heighten my wariness about AI's privacy threats, known and unknown. And as more privacy breaches are disclosed on the back end, in this case at Google, the time is right for more stringent privacy rules and regulations in an evolving world where AI is poised to change the landscape.
The lines between public and private, or consensual and non-consensual, data use are increasingly blurred, and legal frameworks need to adapt to the unique challenges posed by AI advancements. The rapid development of AI systems such as large language models (LLMs) and generative AI brings profound implications for data privacy: personal and sensitive information is often used without proper consent, posing significant risks to our rights and freedoms.
AI is transforming our world in ways that were once the realm of science fiction. If Isaac Asimov were still alive, he’d say, “Exactly as I predicted.” LLMs powering AI systems are capable of generating human-like responses, answering complex questions and performing tasks unimaginable a couple of years ago. We used to talk about the Turing test, but we’re way past that now. Most of us can’t tell whether we’re talking to a human or AI at the other end of a text prompt.
We’re about there with natural conversation as well. These systems ingest huge amounts of data, much of it personal and sensitive, and the legal frameworks designed to protect privacy are struggling to keep pace with AI advancements.
Imagine a teenager posting an Instagram thread about her struggles and anxieties. That post can be scraped and included in AI training data, leading to scenarios where sensitive information is exposed. This highlights the privacy risks and ethical dilemmas posed by using personal information in ways its creators never intended. We need to talk about the core issues surrounding AI and data privacy, and those on AI’s front lines should be having these discussions daily. Should we be adapting legal frameworks? Will those developing the infrastructure establish a respectable and reviewable fabric to protect individuals?
AI’s insatiable appetite for extensive personal data to feed its machine-learning algorithms has raised serious concerns about data storage, usage and access. Public datasets, web pages, articles, social media sites and other text-rich sources help models develop a robust understanding of language, but the data that feeds these models raises privacy concerns because it includes personal and sensitive information, often gathered without proper consent. Training data for GPT-4, for example, is vast and varied, incorporating public and proprietary content. Once that data is transformed and integrated into the model, it becomes difficult to trace it back to its original source, which creates legal questions of ownership and consent.
Unintentional data inclusions also present risks: personal emails, medical records and other documents can accidentally become part of training datasets, leading to privacy breaches. Cross-border data flows present jurisdictional challenges, as data stored in one country may be subject to different privacy laws in another. Evaluating existing data privacy frameworks is crucial. AI systems must comply with regulations like the GDPR, CCPA and HIPAA, but we also need clear, AI-specific regulations that address data provenance, model transparency and ethical considerations. Enhanced consent frameworks and global harmonization of terms and laws are essential.
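To make the point about unintentional inclusions concrete, here is a minimal sketch of the kind of pre-ingestion screen a training pipeline could apply before a document ever reaches a model. It is illustrative only; the patterns, function names and the notion of a flat list of documents are assumptions for the sake of the example, not a description of any vendor's actual pipeline, and real detectors are far more sophisticated.

```python
import re

# Hypothetical patterns for a pre-ingestion PII screen; production systems would
# layer on named-entity recognition, checksum validation and human review.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_document(text: str) -> list[str]:
    """Return the PII categories detected in a candidate training document."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

def filter_corpus(documents: list[str]) -> list[str]:
    """Keep only documents with no detected PII; flagged documents would be
    redacted or routed to review rather than silently ingested."""
    return [doc for doc in documents if not screen_document(doc)]

if __name__ == "__main__":
    corpus = [
        "Public blog post about gardening.",
        "Contact me at jane.doe@example.com or 555-867-5309.",
    ]
    print(filter_corpus(corpus))  # only the first document survives the screen
```

Even a crude screen like this makes the trade-off visible: the second document, which contains an email address and phone number, never enters the corpus in the first place, which is far easier than trying to remove it after the model has been trained.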
Data provenance, which means being transparent about data sources and maintaining a detailed record of data lineage, is vital: it creates accountability in AI systems. Current regulations fall short in mandating comprehensive data provenance practices, making it difficult to trace and verify the origins of data. Addressing bias and fairness is another critical issue. AI systems can perpetuate biases present in their training data, leading to unfair and discriminatory output; if bias goes into the model, discrimination can flow out. Existing privacy laws weren't written to address this, so we need regulations that require companies to audit their systems and apply corrective measures to identify and eliminate bias. This isn't a one-and-done exercise; it should be a regular practice, especially when models change.
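What a provenance record might actually capture is easy to sketch. The field names, the dataclass structure and the choice of a SHA-256 content fingerprint below are assumptions made for illustration, not a standard or a description of how any particular lab tracks lineage today.

```python
import hashlib
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Illustrative lineage metadata attached to each training document."""
    source_url: str            # where the text was collected
    collected_at: str          # UTC timestamp of collection
    license: str               # license or terms under which it was obtained
    consent_basis: str         # e.g. "public-domain", "contractual", "explicit-consent"
    transformations: list[str] = field(default_factory=list)  # cleaning/dedup steps applied
    content_hash: str = ""     # fingerprint for tracing data back to its origin

def make_record(text: str, source_url: str, license: str, consent_basis: str) -> ProvenanceRecord:
    return ProvenanceRecord(
        source_url=source_url,
        collected_at=datetime.now(timezone.utc).isoformat(),
        license=license,
        consent_basis=consent_basis,
        content_hash=hashlib.sha256(text.encode("utf-8")).hexdigest(),
    )

record = make_record("Example article text...", "https://example.org/post", "CC-BY-4.0", "public-domain")
record.transformations.append("html-stripped")
print(asdict(record))
```

The point of keeping even this much metadata is that an auditor, or a person asking whether their data was used, has something to trace; without it, the "difficult to trace back to the original source" problem described above is effectively permanent.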
Enhanced consent and data minimization are also crucial. Traditional consent can be impractical for large-scale data aggregators, and people may give consent for one purpose without intending their data to be used for another. That points toward abuse monitoring: regulated environments like finance and investment already carry fraud-detection mandates, and AI needs similar mechanisms. For example, a system might refuse to carry out certain tasks if it detects potential abuse. Ethical guidelines and industry standards along these lines are being developed.
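One way to picture purpose-scoped consent is to tag each data item with the purposes its owner actually agreed to and filter on that tag before any reuse. The sketch below is a simplified assumption about how such a check could look; the purpose strings, type names and data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConsentedItem:
    """A data item tagged with the purposes its owner actually agreed to."""
    owner_id: str
    text: str
    consented_purposes: frozenset[str]  # e.g. {"service-improvement"}

def usable_for(items: list[ConsentedItem], purpose: str) -> list[ConsentedItem]:
    """Data minimization: release only items whose consent covers this purpose."""
    return [item for item in items if purpose in item.consented_purposes]

items = [
    ConsentedItem("u1", "Support chat transcript", frozenset({"service-improvement"})),
    ConsentedItem("u2", "Survey response", frozenset({"service-improvement", "model-training"})),
]

# Only u2 consented to model training, so only that item may enter a training corpus.
print([i.owner_id for i in usable_for(items, "model-training")])
```

The mechanism is trivial; what regulation would add is the obligation to record consent at this granularity and to honor it every time the data is repurposed.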
Trusting tech giants to police themselves is unreliable; the idea of being your own policeman hardly ever works. Absent other courageous whistleblowers, we need laws, and the public and legislators need to be aware of these issues. AI is complex, and it may be given short shrift in discussions. If AI enterprises object to third parties establishing hard-and-fast guardrails, it is time for them to establish and adhere to a set of uniform standards that reassure the public their private data won't be breached as information is vacuumed up en masse with little regard for the long-range implications for data privacy.
Readers should understand that while AI offers exciting advancements, it also necessitates, short of industry-enforced discipline, a robust legal and regulatory framework to protect individual privacy. Enhanced consent frameworks, data provenance and regular audits for bias and fairness are essential. The industry, the public and legislators must prioritize these issues to ensure AI development respects and safeguards personal data.