Artificial Intelligence Highlights from FTC’s 2024 PrivacyCon

This is the second post in a two-part series on PrivacyCon’s key takeaways for healthcare organizations. The first post focused on healthcare privacy issues.[1] This post focuses on insights and considerations relating to the use of Artificial Intelligence (“AI”) in healthcare. In the AI segment of the event, the Federal Trade Commission (“FTC”) covered: (1) privacy themes; (2) considerations for Large Language Models (“LLMs”); and (3) AI functionality.

AI Privacy Themes

The first presentation in the segment highlighted a study of more than 10,000 participants that gauged their concerns about the intersection of AI and privacy.[2] The study uncovered four privacy themes: (1) data is at risk (the potential for misuse); (2) data is highly personal (it can be used to develop personal insights and to manipulate or influence people); (3) data is often collected without awareness or meaningful consent; and (4) concern about government surveillance and use. The presentation focused on how these themes should be addressed and the associated risks mitigated. For example, AI cannot function without data, yet the volume of data it requires inevitably attracts threat actors. Developers and stakeholders will need to develop AI responsibly and in line with security regulations. Transparency and obtaining data-subject consent are critical.

Privacy, Security, and Safety Considerations for LLMs

The second presentation discussed how LLM platforms are beginning to offer plug-in ecosystems that allow third-party applications to extend their services.[3] While these third-party applications enhance the functionality of LLMs such as ChatGPT, they raise security, privacy, and safety concerns that need to be addressed. Because of ambiguities and imprecision in how third-party applications and LLM platforms interface, these AI services are being offered to the public without systemic privacy, security, and safety issues being resolved.


The study created a framework for examining how the stakeholders in an LLM platform (the platform itself, users, and third-party applications) can take adversarial actions against one another. The study found that attacks can occur by: (1) hijacking the system by directing the LLM to behave a certain way; (2) hijacking the third-party application; or (3) harvesting the user data collected by the LLM. The takeaway from this presentation is that developers of LLM platforms need to emphasize security, privacy, and safety when creating these platforms, which in turn enhances the user experience. Further, once strong security policies are adopted, LLM platforms should clearly state and enforce them.

AI Functionality

The last presentation focused on AI functionality.[4] It described a study of an AI tool that illustrates the fallacy of AI functionality: the psychological tendency to trust AI technology at face value, on the assumption that it works, while overlooking its lack of data validation. Consumers tend to assume that an AI tool functions and that its output is correct, when it may not be. When AI is used in healthcare, this can lead to misdiagnosis and misinterpretation. Therefore, when deploying AI technology, it is important to provide validation data demonstrating that the AI produces accurate results. The healthcare industry has data-validation standards that have yet to be applied to AI, and AI should not be exempt from the same level of validation analysis used to determine whether a tool is clinical grade. The study underscores the importance of the recent HHS transparency rule (HTI-1), which helps facilitate validation data and transparency.[5]


The study demonstrated that without underlying transparency and validation data, users struggle to evaluate the results an AI technology provides. Going forward, it is important to validate AI technology so it can be correctly classified and categorized, allowing users to judge how much weight to give its results.

As the development and deployment of AI grows, healthcare organizations need to be prepared. Healthcare organization leadership should establish committees and task forces to oversee AI governance and compliance and to address the myriad issues that arise from the use of AI in a healthcare setting. Such oversight can help address the complex challenges surrounding AI in healthcare and facilitate responsible AI development with privacy in mind and ethical considerations at the forefront. The AI segment of FTC’s PrivacyCon helped raise awareness of some of these issues, serving as a reminder of the importance of transparency, consent, validation, and security. Overall, the presentation takeaways underscore the multifaceted challenges and considerations that come with integrating AI technologies into healthcare.

FOOTNOTES

[1] Carolyn Metnick and Carolyn Young, Sheppard Mullin Healthcare Law Blog, Healthcare Highlights from FTC’s 2024 PrivacyCon (Apr. 5, 2024).

[2] Aaron Sedley, Allison Woodruff, Celestina Cornejo, Ellie S. Jin, Kurt Thomas, Lisa Hayes, Patrick G. Kelley, and Yongwei Yang, “There will be less privacy of course”: How and why people in 10 countries expect AI will affect privacy in the future.

[3] Franziska Roesner, Tadayoshi Kohno, and Umar Iqbal, LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI’s ChatGPT Plugins.


[4] Batul A. Yawer, Julie Liss, and Visar Berisha, Scientific Reports, Reliability and validity of a widely available AI tool for assessment of stress based on speech (2023).

[5] U.S. Department of Health and Human Services, HHS Finalizes Rule to Advance Health IT Interoperability and Algorithm Transparency (Dec. 13, 2023). See also Carolyn Metnick and Michael Sutton, Sheppard Mullin’s Eye on Privacy Blog, Out in the Open: HHS’s New AI Transparency Rule (Mar. 21, 2024).

