Data Privacy Day 2024: Part 2

With Data Privacy Day coming up, we spoke to cybersecurity industry experts about the latest data privacy trends…

Alex Hazell, Head of Legal and Privacy for Acxiom EMEA, says:

“In 2024, we can no longer discuss data privacy without talking about AI. Up until now, the overarching challenge for businesses has been that pre-existing legislation around privacy and intellectual property was not written with newly released AI technologies in mind.

“While we’ve made significant progress in reaching some form of omnibus regulation through the upcoming EU AI Act, it’s important to remember that effective AI regulation extends beyond safeguarding personal data and tackling copyright and IP infringement. Addressing biases and ensuring fairness in AI systems is also crucial for building ethical and equitable applications that do not pose a threat to individuals or society.

“People are increasingly aware of data privacy issues, and there is no doubt that AI regulation and governance will continue evolving to ensure everyone is being kept safe. However, organisations must make sure they are putting the right measures in place now to develop ethical frameworks, responsible deployment practices, and inclusive decision-making processes – measures that will protect individual privacy and cultivate a trustworthy and responsible AI ecosystem.”


Donnie MacColl, Senior Director of Technical Support and DPO at Fortra, advises:

“Tips for keeping your data secure on Data Protection Day (or during Data Protection Week) 

Set aside an hour, grab a coffee, sit down, and complete the following:  

  • Change your passwords on all your banking and shopping apps, work systems, and so on – keep them safe in a password manager app
  • Set up multi-factor authentication on everything that lets you
  • Sign up to review your credit score (using ClearScore or similar, which is free)
  • Review your bank account and end any direct debits, standing orders, or recurring payments that are no longer needed

Remember, the smaller your personal data footprint, the lower the chance of fraud.” 
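For readers who want to act on the first bullet above, here is a minimal sketch of how a long, random password of the kind a password manager stores for you can be generated with nothing but Python’s standard library. The helper name is illustrative, not something MacColl prescribes:

    # Minimal sketch (editorial, not MacColl's code): build a long random
    # password of the sort you would store straight into a password manager.
    # Uses only Python's standard library.
    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Return a random password mixing letters, digits and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        print(generate_password())  # different on every run

Length matters more than memorability here, since the password manager, not you, is doing the remembering.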


Carla Roncato, Vice President of Identity, WatchGuard Technologies:

“Advances in artificial intelligence (AI) and machine learning (ML) technologies are top of mind this Data Privacy Day, both for the potential benefits and troubling dangers these tools could unleash. Considering the widespread proliferation of AI tools in just this past year, it’s critical that we in the information security community seize this opportunity to raise awareness and deepen understanding of the emerging risk of AI for our data. As AI becomes a more integral – and infringing – presence in our everyday lives, it will have real implications for our data rights.

Remember, if a service you use is “free,” it’s likely that you and your data are the product. This also applies to AI tools, so act accordingly. Many early AI services and tools, including ChatGPT, employ a usage model that’s similar to social media services like Facebook and TikTok. While you don’t pay money to use those platforms, you are compensating them through the sharing of your private data, which these companies leverage and monetise through ad targeting. Similarly, a free AI service can collect data from your devices and store your prompts, then use that data to train its own model. While this may not seem malicious, it’s precisely why it’s so crucial to analyse the privacy implications of processing scraped data to train generative AI algorithms. Say one of these companies gets breached; threat actors could obtain access to your data, and – just like that – have the power to weaponise it against you.

Of course, AI has potential upsides. In fact, many AI tools are quite powerful and can be used securely with proper precautions. The risks your business faces depend on your specific organisation’s mission, needs, and the data you use. In security, everything starts with policy, meaning that ultimately you must craft an AI policy that’s tailored to your organisation’s unique use case. Once you have your policy nailed down, the next step is to communicate it, as well as the risks associated with AI tools, to your workforce. But it’s important to continue to revise or amend this policy as needed to ensure compliance amid changing regulations – and be sure to reiterate it with your workforce regularly.”


Mike Loukides, Vice President of Emerging Tech at O’Reilly:

“How do you protect your data from AI? After all, people type all sorts of things into their ChatGPT prompts. What happens after they hit “send”?  

“It’s very hard to say. While criminals haven’t yet taken a significant interest in stealing data through AI, the important word is “yet.” Cybercriminals have certainly noticed that AI is becoming more and more entrenched in our corporate landscapes. AI models have huge vulnerabilities, and those vulnerabilities are very difficult (perhaps impossible) to fix. If you upload your business plan or your company financials to ChatGPT to work on a report, is there a chance that they will “escape” to a hostile attacker? Unfortunately, yes. That chance isn’t large, but it’s not zero.  

“So here are a few quick guidelines to be safe:  

  • Read the fine print of your AI provider’s policies. OpenAI claims that they will not use enterprise customers’ data to train their models. That doesn’t protect you from hostile attacks that might leak your data, but it’s a big step forward. Other providers will eventually be forced to offer similar protections.  
  • Don’t say anything to an AI that you wouldn’t want leaked. In the early days of the Internet, we said “don’t say anything online that you wouldn’t say in public.” That rule still applies on the Web, and it definitely applies to AI.  
  • Understand that there are alternatives to the big AI-as-a-service providers (OpenAI, Microsoft, Google, and a few others). It’s possible to run several open source models entirely on your laptop; no cloud, no Internet required once you’ve downloaded the software. The performance of these models isn’t quite the equal of the latest GPT, but it’s impressive. Llamafile is the easiest way to run a model locally. Give it a try.  

“I’m not suggesting that anyone refrain from using AI. So far, the chances of your private data escaping are small. But it is a risk. Understand the risk, and act accordingly.” 
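To make the local-model option above concrete, here is a minimal sketch of querying a llamafile running on your own machine instead of a cloud service. It assumes the llamafile’s built-in server is already running, listening on localhost port 8080, and exposing llama.cpp’s OpenAI-compatible chat endpoint (typical defaults, but check your version’s documentation); the helper name and placeholder model name are illustrative:

    # Minimal sketch (editorial, not Loukides' code): send a prompt to a model
    # running entirely on your own machine via a locally started llamafile.
    # Assumes the built-in server exposes an OpenAI-compatible chat endpoint
    # on port 8080 (typical default, but check your version's docs).
    import json
    import urllib.request

    LOCAL_ENDPOINT = "http://localhost:8080/v1/chat/completions"

    def ask_local_model(prompt: str) -> str:
        payload = {
            # The local server generally serves whichever weights the
            # llamafile bundles, so the model name is just a placeholder.
            "model": "local-llamafile",
            "messages": [{"role": "user", "content": prompt}],
        }
        request = urllib.request.Request(
            LOCAL_ENDPOINT,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request) as response:
            body = json.load(response)
        return body["choices"][0]["message"]["content"]

    if __name__ == "__main__":
        # The prompt, and any sensitive material in it, never leaves this machine.
        print(ask_local_model("Summarise the main risks in our draft business plan."))

Because the request never leaves localhost, the prompts, and whatever business data they contain, stay on your machine.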


Dr Andrew Bolster, Senior Manager, Research and Development at Synopsys Software Integrity Group:

“Data Privacy Week 2024 is a fantastic opportunity for data owners, business leaders and importantly, consumers, to reflect on the seismic changes to the data privacy landscape over the past year.

The transformational factor of 2023 was, of course, the explosion of Generative AI onto the world. Over the course of the year, we were bombarded with more and more innovative and amazing examples of content generated by these systems, but were also increasingly educated in the often cavalier way these systems (like Large Language Models) were greedily ‘trained’ on every piece of human-generated content imaginable. From our family photos on social media to fan-fiction stories from niche internet forums, a range of organisations grabbed all the ‘publicly available’ data they could from the internet with abandon.

Given the computational expense and significant investment that these models represented, companies like Google, OpenAI, Anthropic, and others have been applying these LLMs to every possible use to claw back the billions of compute-hours that went into them. But a couple of critical questions emerged over the course of the year: if these models were trained on any data that was made available to them, could that same ‘content’ be expressed or otherwise extracted from the models after the fact? And if so, what potential licensing pollution might this represent for businesses operating these models?

Over the past decade, society has become more and more comfortable (or perhaps complacent) in trading data about our private lives, preferences, thoughts, and experiences to get everything from tailored shopping and streaming media recommendations to cheaper insurance premiums or customised healthcare coverage. However, LLMs and other Generative AI systems are now not only optimising the content and products we consume, but constructing or hallucinating them from whole cloth, with often blurry lines in terms of creative ownership of that content.

Although 2023 might have been the year of Generative AI, 2024 may be the year where fundamental questions about the nature of content authorship, ownership, and commercialisation come into crystalline focus, and that uncertainty presents challenges both to data producers (including the general public) and to service providers leveraging these LLM tools to serve business needs. Much the way that application security today is becoming more and more focused on understanding and securing the software supply chain, data-driven, LLM-powered systems may need to consider the origin and provenance of the data used to both train and operate these systems in future.”


The post Data Privacy Day 2024: Part 2 first appeared on IT Security Guru.
