In the contemporary digital landscape, artificial intelligence (AI) can serve as both a guardian and a threat to personal data.

AI technology holds the promise of enhanced security and efficiency. Yet the proliferation of generative AI presents unique challenges that can compromise sensitive personal information. Not surprisingly, the McKinsey Technology Trends Outlook of July 2024 named generative AI at the top of 15 notable trends, citing a spike of almost 700% in Google searches from 2022 to 2023; applied AI and industrialized machine learning were also named, but digital trust and cybersecurity is not far behind. This article explores the double-edged sword, the blessing and the curse, so common to the use of AI in the context of personal data: 1) cybersecurity and the ransomware threat; 2) how AI can safeguard personal data, with innovative solutions that improve data protection; 3) AI and interoperability; 4) the risks, challenges, and positives associated with AI in protecting personal data; and finally, 5) the importance of creating a culture that understands these risks while exploring ways to mitigate them.

I: The Problem: From Cyberthreats to Ransomware

On Aug. 29, 2024, a number of government agencies issued a joint cybersecurity advisory to warn about known ransomware being used by RansomHub, which, since its inception in February 2024, has encrypted and exfiltrated data from more than 210 entities across various industries and sectors. The full extent of the damage is still unfolding. While many types of organizations have been affected, the health care industry faces unique challenges: its need for digital connectivity leaves it exposed to targeting by cybercriminals. High-profile incidents like the Change Healthcare attack underscore why the health care industry is one of the most lucrative targets for cybercriminals, given both the critical nature of health care operations and the value the data can fetch, whether from the willingness of health care organizations to pay ransoms or on the dark web.

The Department of Health and Human Services (HHS) is particularly committed to improving cybersecurity in the health care sector, building on the dedicated efforts of the Cybersecurity Task Force established under Section 405(d) of the Cybersecurity Act of 2015. Through the 405(d) Program, the Health Sector Coordinating Council Cybersecurity Working Group continues to help develop resources so that the health care and public health sector has the tools to address new and evolving cyberthreats. In July, HHS announced that it was reorganizing roles and functions within the department to transfer oversight of cybersecurity, technology, data and AI policy and strategy from the Office of the National Coordinator for Health Information Technology and the Assistant Secretary for Strategic Preparedness and Response to a newly formed Office of the Assistant Secretary for Technology Policy/Office of the National Coordinator (ASTP/ONC). This reorganization will allow HHS to focus on methods to continuously monitor, detect, hunt and mitigate the toll that cyberthreats take on health care organizations. In the August joint advisory, HHS joined with the Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the Multi-State Information Sharing and Analysis Center (MS-ISAC) to encourage network defenders to implement the following recommendations to reduce the likelihood and mitigate the impact of ransomware incidents:

    • Implementing a recovery plan to maintain and retain multiple copies of sensitive or proprietary data;
    • Requiring accounts with password logins to comply with National Institute of Standards and Technology (NIST) standards for developing and managing password policies;
    • Keeping operating systems, software, and firmware up to date;
    • Requiring phishing-resistant multifactor authentication for administrator accounts;
    • Segmenting networks to prevent the spread of ransomware; 
    • Identifying, detecting, and investigating abnormal activity and potential traversal of the indicated ransomware with a network monitoring tool;
    • Installing, regularly updating, and enabling real-time detection for antivirus software on all hosts;
    • Reviewing domain controllers, servers, workstations, and active directories for new and/or unrecognized accounts.

II: A Solution: AI as the Guardian of Personal Data

Can AI help safeguard personal data? An April 2024 survey from the Cloud Security Alliance found that AI is already transforming cybersecurity, with 67% of respondents stating that they have tested AI specifically for cybersecurity purposes and more than half planning to adopt AI solutions within the next year. AI can significantly enhance personal data security by utilizing sophisticated algorithms capable of identifying and mitigating potential threats. Here are a few examples of how AI can protect personal data:

  • Anomaly Detection: AI systems use machine learning algorithms to analyze vast amounts of data in real time, detecting unusual patterns that may indicate a data breach. These datasets reveal patterns that define typical behavior, signaling "business as usual." Any significant deviation is considered an anomaly, indicating potential issues. A credit card transaction that does not align with a user's past behavior, for example, is considered an anomaly; in network security, unusual traffic patterns may suggest an intrusion attempt. (A minimal sketch of this idea follows this list.)
  • Predictive Analytics: By leveraging historical data, AI can predict potential security threats before they occur. For instance, IBM's Watson has been used to analyze data from numerous sources, including social media and previous cyber-incidents, to forecast possible vulnerabilities and inform preventative measures. IBM now markets its AI platform for business as watsonx.ai.
  • Behavioral Analysis: AI can monitor user behavior to identify when someone might be acting outside of their normal range of activities, which can suggest compromised accounts. By flagging these anomalies, organizations can take immediate action to secure affected accounts before further damage occurs.
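
To make the anomaly-detection bullet concrete, the sketch below flags credit card charges that deviate sharply from a user's historical spending. It is a deliberately simplified, hypothetical illustration in Python (the amounts and threshold are invented for the example); production systems rely on far richer features and learned models rather than a single statistical rule.

```python
# Toy baseline-deviation check: flag transactions far outside a user's
# historical spending pattern. Purely illustrative, not any vendor's method.
from statistics import mean, stdev

def find_anomalies(history, new_transactions, threshold=3.0):
    """Return (amount, z_score) pairs more than `threshold` standard
    deviations from the historical average."""
    baseline_mean = mean(history)
    baseline_std = stdev(history)
    if baseline_std == 0:  # identical history; no variation to compare against
        return []
    flagged = []
    for amount in new_transactions:
        z_score = (amount - baseline_mean) / baseline_std
        if abs(z_score) > threshold:
            flagged.append((amount, round(z_score, 1)))
    return flagged

# A card normally used for small purchases suddenly shows a large charge.
past_amounts = [12.50, 40.00, 22.75, 35.10, 18.99, 27.40, 31.25, 15.00]
print(find_anomalies(past_amounts, [24.99, 1899.00]))
# The 1899.00 charge is flagged with a very large z-score; 24.99 is not.
```

The same "learn the baseline, flag the deviation" pattern underlies the network-traffic and user-behavior examples above, with the inputs swapped for packet statistics or login activity.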

Other Innovations in AI for Data Protection: As organizations grapple with the complexities of data safeguarding, several innovative AI-driven approaches have emerged:

  • Synthetic Data Generation: To alleviate concerns surrounding the risks of using sensitive personal data, synthetic data creation has gained traction as a way to protect personal data. As explained by Mostly AI, synthetic data is created by generative AI models trained on "real world" data. The resulting synthetic data resembles the original data but does not contain any personal information; it can then be used for training AI models, testing systems, or research without exposing sensitive details (a toy sketch follows this list). While synthetic data can offer an alternative, employing encryption techniques, anonymization and de-identification, and/or differential privacy remains critical to protecting data in storage and in transit, and thus to adequate privacy protection. A CGI blog notes that by using synthetic data, "organizations and researchers can perform extensive analyses and develop AI models without the risks and limitations associated with using real data that is confidential and/or sensitive."
  • AI-Driven Access Controls: AI applications can enforce strict access controls based on current threat assessments and user context, ensuring that personal data is only available to those who require it. For instance, platforms like Okta utilize machine learning to adjust security and authentication policies dynamically, considering the user's behavior and environment.
  • Real-Time Data Protection Monitoring: Companies are utilizing AI tools for continuous monitoring of data flows to identify vulnerabilities in real time. Such applications can alert organizations to potential data leaks or unauthorized access attempts, allowing for immediate response measures.
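
The synthetic-data bullet above describes generating records that mirror real data without containing it. The toy Python sketch below captures only per-column summary statistics and samples new, fictitious records from them; the field names and values are invented for illustration, and real generative approaches model the full joint distribution and still require the privacy safeguards noted above.

```python
# Hypothetical sketch of synthetic data generation: learn simple statistics
# from real records, then sample brand-new records that match those
# statistics but correspond to no real person. Illustrative only.
import random
from statistics import mean, stdev

real_patients = [  # invented example records
    {"age": 34, "region": "NY", "annual_visits": 2},
    {"age": 58, "region": "CA", "annual_visits": 6},
    {"age": 41, "region": "NY", "annual_visits": 3},
    {"age": 67, "region": "TX", "annual_visits": 8},
]

def fit(records, numeric_cols, categorical_cols):
    """Capture per-column summary statistics from the real data."""
    model = {"numeric": {}, "categorical": {}}
    for col in numeric_cols:
        values = [r[col] for r in records]
        model["numeric"][col] = (mean(values), stdev(values))
    for col in categorical_cols:
        model["categorical"][col] = [r[col] for r in records]
    return model

def sample(model, n):
    """Draw n synthetic records from the fitted statistics."""
    synthetic = []
    for _ in range(n):
        row = {}
        for col, (mu, sigma) in model["numeric"].items():
            row[col] = max(0, round(random.gauss(mu, sigma)))
        for col, observed in model["categorical"].items():
            row[col] = random.choice(observed)
        synthetic.append(row)
    return synthetic

model = fit(real_patients, ["age", "annual_visits"], ["region"])
print(sample(model, 3))  # e.g. [{'age': 49, 'annual_visits': 4, 'region': 'NY'}, ...]
```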

III: The Interoperability Issue

Starting with the authorities under the HITECH Act of 2009, which tied financial incentives to the adoption of certified electronic health IT by eligible hospitals and providers, HHS has been working to improve the electronic exchange of health data. Interoperability is intended to break down data silos. While it may appear, at first glance, to compromise data privacy, it is interoperability that will provide the foundation and opportunity for standardizing health data, improving its accuracy, and ensuring that this data is up to date. It is on the foundation of that data layer that trustworthy machine learning tools and AI can be designed, for example, to accelerate time-to-diagnosis and insight into accurate treatment. Generative AI technologies can be leveraged to accelerate innovations and fuel new discoveries while keeping health and life sciences data secure and private. While interoperability will help provide patients with access to services, it can also contribute meaningful data inputs through which large language models can help patients engage in wellness activities outside of and between encounters with providers, avoiding costly hospitalizations. Greater interoperability could also mean that, when changes to a record are wrongfully made, such a change would be easily recognized.

Before AI was even on the horizon, we recognized that developing an interoperable national health information network could also help to reduce health care fraud, using capabilities such as abnormal pattern recognition in claims, system audits, practice pattern monitoring and more. The principle that interoperability can provide a platform to implement real-time anti-fraud controls remains true. Our work was cited, for example, in the Centers for Medicare & Medicaid Services' 2020 Final Rule on Interoperability and Patient Access for Medicare Advantage Organization and Medicaid Managed Care Plans, State Medicaid Agencies, CHIP Agencies and CHIP Managed Care Entities, Issuers of Qualified Health Plans on the Federally Facilitated Exchanges, and Health Care Providers. In 2024, Kenneth D. Mandl, Daniel Gottlieb and Joshua C. Mandel concurred in an article in Nature, stating that "the emergence of generative artificial intelligence amplifies the demand for high-quality, current healthcare data" and that "[t]he integration of AI into health care will require diverse electronic health information, easily extracted by EHRs, for use by innovators in developing models across industry, academia and government."

IV: The Risks and Challenges of AI

Employee Error and Complacency: Despite its advantages, AI can inadvertently foster a sense of complacency among employees, a phenomenon often referred to as "automation boredom." As employees grow accustomed to relying on AI systems, they may let their guard down, leading to unintentional data breaches and vulnerabilities. Personnel may become overly reliant on automated systems, failing to question AI decisions or protocols. This type of complacency can result in employees inadvertently sharing personal information without verifying whether it is appropriate or necessary. Moreover, routine interactions with AI can lead employees to neglect best practices in data security, such as regularly updating passwords or verifying requester identities. The Verizon 2024 Data Breach Investigations Report noted that "roughly one-third of all breaches involved ransomware or some other extortion technique"; that "68 [percent] of breaches involved a human element"; and that "28 [percent] of breaches involved errors," which "validates our suspicion that errors are more prevalent than the media or traditional incident response-driven bias would lead us to believe." Solutions Review also noted that the human factor is a double-edged sword: vital to an organization's cyberoffense, but also the weak link in its defenses.

Cybercriminals Use AI, Too: Cybercriminals often exploit individuals' trust to implement phishing schemes and social engineering attacks. For example, they might impersonate a legitimate AI system, prompting employees to provide sensitive information under false pretenses. 

California's proposed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which passed the State Assembly at the end of August, recognizes the criminal element, noting that "[i]f not properly subject to human controls, future development in artificial intelligence may also have the potential to be used to create novel threats to public safety and security, including by enabling the creation and the proliferation of weapons of mass destruction, such as biological, chemical, and nuclear weapons, as well as weapons with cyber-offensive capabilities." The legislation accordingly requires developers of covered models to comply with certain requirements, including the capability to promptly enact a full shutdown and the implementation of a written, separate safety and security protocol.

Yet SB 1047 did not get past the governor, and others aren't panicking just yet. "From our perspective, the threat actors might well be experimenting and trying to come up with Gen AI solutions to their problems," the Verizon report states. "But it really doesn't look like a breakthrough is imminent or that any attack-side optimizations this might bring would even register on the incident response side of things. The only exception here has to do with the clear advancements on deepfake-like technology, which has already created a good deal of reported fraud and misinformation anecdotes." 

V: More Positives

Of course, the California legislation recognizes AI's benefits: "Artificial intelligence, including new advances in generative artificial intelligence, has the potential to catalyze innovation and the rapid development of a wide range of benefits for Californians and the California economy, including advances in medicine, wildfire forecasting and prevention, and climate science, and to push the bounds of human creativity and capacity." At the federal level, bills continue to make their way to the Senate and the House, from a bill authorizing the director to identify challenges and award competitive prizes for artificial intelligence research and development (H.R. 9475) to one directing the National Institute of Standards and Technology to evaluate emerging practices and norms relating to AI systems (H.R. 9466), including those relating to transparency, security, safety, privacy, reliability, accountability, and more. While there will undoubtedly be tweaks, revisions, successes and failures (does the disclosure of security practices used in the development of an AI system itself create a threat?), keeping abreast of the latest developments will be critical.

Just recently, in September 2024, the Mount Sinai Health System and IBM Research announced a study designed to leverage AI advances by using behavioral data from clinical visits, smartphones, and cognitive testing to predict outcomes such as treatment discontinuation, hospitalizations, and emergency room visits for young people seeking mental health evaluation and treatment. The study invites patients to have audio and visual recordings made of clinical visits in order to assess "spoken language, eye contact, and facial expressions from both the patient and clinician," which raises interesting questions. Assuming the data is free of the dangers of discrimination or bias surrounding mental health predictions, what might AI detect from a smile or a smirk that could help young people later on?

Conclusion

The advancement of AI in data protection presents a paradox—while it serves as a powerful tool for safeguarding personal data, it simultaneously introduces risks that must be carefully managed. Organizations must strike a balance by harnessing the benefits of AI while remaining vigilant against employee complacency and the exploitation of automated systems. By adopting innovative AI-driven solutions and cultivating a strong culture of data security awareness among employees, businesses can leverage AI to protect personal data effectively, transforming what could be a curse into a lasting blessing.

* * * *

This article was written with the assistance of Epstein Becker Green staff attorney Ann W. Parks.

Reprinted with permission from the October 2, 2024, edition of the “New York Law Journal" © 2024 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
