
Right To Know - August 2024, Vol. 20

August 13, 2024

Cyber, Privacy, and Technology Report

 

Welcome to your monthly rundown of all things cyber, privacy, and technology, where we highlight all the happenings you may have missed.

View previous issues and sign up to receive future newsletters by email here. 

 

State Action:  

  • Texas AG Obtains $1.4 Billion Settlement Against Meta: The Texas Attorney General secured a $1.4 billion settlement, the largest settlement ever obtained by a single state, in a lawsuit against Meta over allegations that the company captured and used the personal biometric data of millions of Texans without the authorization required by law. The lawsuit arose from the “Tag Suggestions” feature Facebook rolled out in 2011, which suggested the names of people in a photo based on facial recognition software. The Texas Attorney General alleged that Meta allowed this feature to collect biometric data for over 10 years in violation of Texas’s “Capture or Use of Biometric Identifier” Act (“CUBI”) and the Deceptive Trade Practices Act.
  • Pennsylvania Updates Data Breach Notice Law: On June 28, 2024, Governor Josh Shapiro signed a law amending the 2005 Pennsylvania Breach of Personal Information Notification Act, effective September 26, 2024. For breaches involving more than 500 affected Pennsylvania residents, the amendments require (1) notice to the Pennsylvania Attorney General at the same time that notice is sent to affected residents (excluding entities covered by the Pennsylvania insurance data security law), (2) notice to consumer reporting agencies (a threshold reduced from more than 1,000 affected residents), and (3) for defined kinds of personal information, assumption of all costs and fees of providing a credit report and 12 months of credit monitoring. The amendments also limit the “medical information” that can qualify as “personal information” to “medical information in the possession of a state agency or state agency contractor.”
  • Governor Pritzker Signs Illinois’ BIPA Amendment: On August 2, 2024, Governor Pritzker signed a bill amending Illinois’ Biometric Information Privacy Act. The amendments make two key changes to the law. First, they explicitly provide that consent for the collection, capture, dissemination, etc. of biometric identifiers can be obtained through an electronic signature. Second, and of particular importance for companies, they reverse the Illinois Supreme Court’s ruling that violations accrue on a “per scan” or “per dissemination” basis. Instead, under the amendments, (a) the collection, capture, purchase, receipt, or obtaining of the same biometric identifier from the same person via the same method of collection constitutes only one violation, regardless of how many times it occurs, and (b) the disclosure, redisclosure, or dissemination of the same biometric identifier of the same person to the same recipient using the same method of collection constitutes only one violation, regardless of the number of times the information was disclosed, redisclosed, or disseminated. Among other open questions is whether these amendments will apply retroactively.

Regulatory:  

  • NYDFS Issues Circular Regulating the Use of Artificial Intelligence in Insurance Underwriting and Pricing: On July 11, 2024, the New York Department of Financial Services (“NYDFS”) published a circular regulating the way that insurers use Artificial Intelligence Systems (“AIS”) and External Consumer Data and Information Sources (“ECDIS”) in underwriting and pricing functions. The circular mandates an assessment of whether an underwriting or pricing guideline derived from either ECDIS or AIS is unfairly or unlawfully discriminatory. The circular also sets forth governance and risk management guidelines and disclosure obligations for insurers using ECDIS and AIS.
  • HHS OCR Announces Third Ransomware Settlement: On July 1, 2024, the U.S. Department of Health and Human Services’ (HHS) Office for Civil Rights (OCR) announced a $950,000 settlement and three-year corrective action plan with Heritage Valley Health System (HVHS), a comprehensive health care provider for residents of Pennsylvania, eastern Ohio, and the panhandle of West Virginia. The settlement follows an investigation of HVHS that HHS OCR opened on October 31, 2017 into potential violations of the Health Insurance Portability and Accountability Act (HIPAA) Security Rule following a ransomware attack. HHS OCR alleged that, among other things, HVHS failed to conduct an accurate and thorough risk analysis of the potential risks and vulnerabilities to the confidentiality, integrity, and availability of electronic protected health information (ePHI). As part of the corrective action plan, HVHS will implement a risk management plan to address and mitigate security risks and vulnerabilities identified in its risk analysis, develop and revise written policies and procedures to comply with HIPAA, and train its workforce on those policies and procedures.
  • The FTC’s Final Health Breach Notification Rule Includes Health and Wellness Applications: The Federal Trade Commission’s (“FTC’s”) final rule expanding the scope of its existing Health Breach Notification Rule to include health and wellness applications went into effect on July 29, 2024. When the Health Breach Notification Rule was enacted, it required personal health record vendors to inform individual consumers and the FTC whenever identifiable health information was inappropriately disclosed in a security incident. Since then, applications and consumer health technologies, such as fitness trackers and wearable blood pressure monitors, have become widely adopted. The final rule broadens the rule’s scope to cover the “unauthorized acquisition of unsecured PHR identifiable health information in a personal health record” stemming from either a “data breach” or an “unauthorized disclosure.” The final rule also expands the definition of a “covered health care provider” to include any entity not covered by HIPAA that furnishes health care services or supplies. Developers of health applications and similar technologies should take note that any individually identifiable health information their products collect or use could be covered by the final rule.
  • Messaging App Settles Lawsuit for $5 Million, Banned from Offering App to Anyone Under 18: NGL Labs, LLC, was the subject of a lawsuit alleging that its app falsely claimed that its AI content moderation program filtered out cyberbullying and other harmful messages. NGL was also accused of actively marketing its service to children and teens, and of sending fake messages that appeared to come from real people to trick them into signing up for paid services. NGL marketed its app as a “safe space for teens” and claimed it used “world class AI content moderation” to combat cyberbullying and other harms. Under the parties’ stipulation (upon which the court entered a final order on July 14, 2024), NGL, among other things, was prohibited from offering anonymous messaging apps to users under 18 years old, prohibited from collecting personal information from children, required to delete information about children that was previously collected, and required to pay $4.5 million to the FTC and $500,000 to the State of California.

Litigation & Enforcement:

  • 4th Circuit Finds No Expectation Of Privacy In Location History Over Limited Time Period: On July 9, the United States Court of Appeals for the Fourth Circuit affirmed the denial of a motion to suppress evidence, finding that the defendant did not have a reasonable expectation of privacy in the two hours of “Location History” data obtained by law enforcement from Google. The case arose from the increasingly common use of “geofence” warrants by law enforcement. The defendant argued that the search violated his Fourth Amendment rights because it “violated his reasonable expectation of privacy in his location information,” and that the warrant lacked probable cause and was not sufficiently particular. The Fourth Circuit disagreed. Pointing to the limited time frame of the geofence warrant and the fact that the defendant voluntarily provided his “Location History” to Google by turning that feature on in his phone, among other facts, the court found that the defendant had no reasonable expectation of privacy and that the government “did not conduct a search when it obtained this information from Google.”
  • Meta’s And Snap’s Use of User Data For Business Purposes Removes SCA Protection In Criminal Case: A defendant charged with murder served subpoenas on Snap, Inc. and Meta for the contents of a third party’s social media accounts. Meta and Snap argued that the trial court’s order denying their motions to quash was improper because good cause did not exist for production of the material and because the production violated the Stored Communications Act (SCA). The California Court of Appeal first found there was sufficient support for the trial court’s finding of good cause. Turning to the SCA, the court reviewed the relevant statutory definitions and exceptions. The court noted that, if a user’s account is set to “public,” that would satisfy the user consent exception to the SCA. The court then found that Meta’s and Snap’s use of user data for business purposes (e.g., providing personalized advertising) removed them from the definitions of covered entities under the SCA, since the data was not held by Meta and Snap solely for “facilitating communications or storing the content as backup for the user.” The court noted, however, that this does not mean Meta and Snap are free to disclose user data as they wish, as other state laws and the platforms’ terms of service may also limit disclosure.
  • Biometric Data Privacy Case Against Amazon and Starbucks Mostly Dismissed: In Mallouk et al. v. Amazon and Starbucks (W.D. Wash.), plaintiffs filed a putative class action against Amazon and Starbucks claiming the companies illegally tracked their biometric information in 2023 through a collaboration using Amazon’s “Just Walk Out” technology, specifically palm scans. Plaintiffs sought statutory damages, among other categories of relief. Plaintiffs brought claims under New York City’s Biometric Identifier Information Law, NYC Admin. Code § 22-1202(a) and (b), which, among other things, requires disclosure if biometric information is stored or collected and prohibits the sale of biometric information to third parties, as well as unjust enrichment claims. On July 23, 2024, U.S. District Court Judge Ricardo Martinez dismissed the disclosure claim against Amazon because the plaintiffs failed to provide pre-suit notice of the alleged disclosure violation. The Court also dismissed the claim against Amazon and Starbucks premised on the sale of biometric information because the plaintiffs alleged only that Amazon’s palm scanning devices allow Amazon to link biometric information to other information about customers, enabling more targeted marketing decisions; the Court found this theory too attenuated and held that it did not constitute profit from a biometric transaction itself. It was not a complete victory, though, as the Court permitted one plaintiff’s unjust enrichment claim to stand on the theory that if he had known Amazon was collecting biometric identifier information, he would not have entered the store or purchased anything from the store, or would not have paid as much for the items he purchased (a “benefit of the bargain” damages theory, over which there is significant litigation in the class action context).
  • CA AG Announces Settlement With Mobile Gaming App Company Over Illegal Collection of Children’s Data: Tilting Point Media, a mobile gaming app company, resolved allegations from the California Attorney General’s office and the Los Angeles City Attorney that the company had illegally collected children’s information in violation of the California Consumer Privacy Act (CCPA) and the federal Children’s Online Privacy Protection Act (COPPA) through its “SpongeBob: Krusty Cook-Off” gaming app. To resolve the matter, Tilting Point paid $500,000 in civil penalties and agreed to comply with an injunction ensuring lawful collection and disclosure of information from children.

International Updates:

  • The South Korean Data Protection Authority Publishes Guidelines on Personal Data Protection Law for Foreign Entities: The South Korean data protection authority, the Personal Information Protection Commission (“PIPC”), published guidelines to assist foreign entities operating in South Korea in complying with South Korean personal data protection law. The guidelines provide clarity on the main legal provisions in force in South Korea and on decisions by the PIPC and the Korean courts in relation to matters of personal data protection. The aim of the guidelines is to encourage foreign entities to adopt rigorous data protection practices in order to protect South Korean citizens.
  • Ireland’s National Cyber Security Centre Lacking Resources and Personnel: Ireland’s National Cyber Security Centre (“NCSC”) was established in 2015 to monitor, detect, and respond to cyber security incidents in the Irish State and to build resilience in IT systems. According to the Irish Independent, a confidential internal review has identified issues within the NCSC that may prevent the organization from carrying out its obligations. The NCSC appears to be struggling to keep up with its workload due to depleted morale within the organization, a lack of Government support, too few resources, and a lack of effective communication with other state agencies. The review has made several recommendations to the NCSC to remedy the issues identified.
  • Irish Firms “Not Ready for Next Big Cyber-Attack”: The Cyber Skills university collaboration in Ireland has warned that most Irish businesses are not ready for the NIS2 EU Directive on proactive cybersecurity for certain businesses. The directive is due to take effect in mid-October and applies to critical utilities as well as a range of third-party businesses that provide goods or services. Its scope has expanded significantly since the first directive and is likely to catch many businesses unaware, despite the large fines that can be levied under the directive. The announcement comes as many businesses have yet to recover fully from the CrowdStrike outage.
  • The European Commission Announces In Preliminary Findings That Meta’s “Pay or Consent” Model Does Not Comply With the Digital Markets Act: On July 1, 2024, the European Commission (EC) informed Meta of its preliminary findings that Meta’s “pay or consent” advertising model fails to comply with the Digital Markets Act (DMA). Under the DMA, gatekeepers cannot make the use of a service or certain functionalities conditional on users’ consent. In November 2023, Meta introduced a binary “pay or consent” offer whereby EU users of Facebook and Instagram had to choose between paying a monthly subscription fee for an ad-free version of Facebook and Instagram or accessing a free version of these social networks with personalized ads. The EC’s preliminary view is that Meta’s “pay or consent” advertising model does not comply with the DMA because it does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the free version with personalized ads. The EC also noted that the model does not allow users to exercise their right to freely consent to the combination of their personal data. The EC has until March 25, 2025 to conclude its investigation; if its preliminary views are confirmed, it will adopt a decision finding that Meta’s “pay or consent” model does not comply with the DMA.

Industry Updates:  

  • CISA Acknowledges IT Outages Due to CrowdStrike Update and Provides Guidance: On July 19, 2024, the cybersecurity firm CrowdStrike pushed a faulty update to its endpoint detection and response tool known as Falcon. The update caused approximately 8.5 million Windows devices to become inoperable, bringing banks, airlines, and other industries to a halt before the issue was identified and remediated. CrowdStrike confirmed that the outage was not caused by a malicious actor. CISA’s alert noted that, as of July 26, CrowdStrike had reported threat actors leveraging the outage.
  • WhatsApp for Windows Vulnerability Allows Malicious Scripts to be Run by Bad Actors: The popular messaging app WhatsApp has approximately 100M monthly users worldwide. The app allows users to connect with friends and family securely and privately by encrypting messages and calls. While there are layers of security built into the app, the recent discovery of a vulnerability in the WhatsApp for Windows application leaves room for exploitation by bad actors. WhatsApp allows users to send various attachments and file types as messages, and certain file types are automatically blocked by the app’s spam detection. However, one user discovered that the app does not restrict all malicious file types. For example, Python and PHP files are currently unrestricted and can be executed without warning. This leaves room for the mass deployment of malicious scripts in the event that a user’s account is taken over by a bad actor.
  • Dark Angels Ransomware Group Sets Record: According to the ThreatLabz 2024 Ransomware Report, ransomware attacks have increased measurably year over year, with the healthcare and technology industries hit the hardest. There are over 391 ransomware gangs in existence, and Dark Angels is among the newer groups, appearing on the radar around 2022 and quickly making a name for itself. According to the report, Dark Angels broke the record for the highest ransom paid after attacking an undisclosed victim and demanding, and being paid, a $75M ransom, surpassing the previous record of $40M.
  • VMware Vulnerability Allows Threat Actor Groups to Gain Admin Rights: Microsoft recently discovered a vulnerability in ESXi hypervisors being exploited by ransomware groups. The vulnerability gives ransomware groups full administrative permissions on domain-joined hypervisors. ESXi is installed directly onto a physical server and provides direct access to and control of the underlying resources. If a ransomware group exploits this vulnerability and gains full administrative permissions on an ESXi hypervisor, it can encrypt the file system and access hosted virtual machines. VMware has released a security update, and ESXi server administrators are advised to apply it.
  • Two Russian Nationals Plead Guilty to Participating in the LockBit Ransomware Group: Two Russian nationals have pleaded guilty to participating in the LockBit ransomware group, which has deployed attacks against companies in the United States. One of the individuals pleaded guilty to conspiracy to commit computer fraud and abuse and conspiracy to commit wire fraud, and faces a maximum penalty of 25 years in prison. The other pleaded guilty to conspiracy to commit computer fraud and abuse, intentional damage to a protected computer, transmission of a threat in relation to damaging a protected computer, and conspiracy to commit wire fraud, and faces a maximum penalty of 45 years in prison. Sentencing dates have not yet been set; sentences will be determined by a federal district court judge based on the US Sentencing Guidelines and statutory factors.
  • Center for Internet Security Publishes How Cyber Threat Actors Can Leverage Generative AI: The Center for Internet Security has published An Examination of How Cyber Threat Actors Can Leverage Generative AI Platforms. The explosive growth of Generative Artificial Intelligence (GenAI) following the release of ChatGPT in late 2022 has provided network defenders with new tools for defense, but has also provided cyber threat actors with new tools for attacks. The report explains how defenders can improve their defenses by understanding how threat actors are leveraging GenAI platforms. Its recommendations include providing social engineering and phishing training; incorporating the latest findings and trends from GenAI research; implementing a standardized protocol for handling suspicious emails; considering what personal and professional information is posted publicly; training staff to recognize deepfake videos and challenging users to spot suspicious visual cues; exercising increased vigilance for unusual requests, such as large wire transfers or the submission or modification of user credentials; and, when in doubt, using simple verification tests.
  • NIST Publishes Risk Management Framework Quick-Start Guide for Small Enterprises: On July 23, 2024, the National Institute of Standards and Technology (NIST) announced the publication of the NIST Risk Management Framework (RMF) Small Enterprise Quick Start Guide. Cybersecurity and privacy risk management are critical for businesses and organizations of all sizes. The Guide provides information for small enterprises on understanding, designing, and implementing risk management programs in accordance with NIST SP 800-37 Rev. 2, Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy (December 2018). The RMF and the Guide describe a seven-step risk management process: Prepare, Categorize, Select, Implement, Assess, Authorize, and Monitor.
  • Flaw in Android Telegram Allows Malware Distribution via Videos: ESET researchers found a zero-day exploit in Telegram for Android that could allow attackers to distribute Android malware through Telegram channels, groups, and chats by making malicious payloads appear as multimedia files. While a patch has been issued, Telegram users who have not updated remain vulnerable to this threat.
  • NIST Releases Final Versions of Safe AI Development Guidance: The U.S. National Institute of Standards and Technology (NIST) recently released several items to help organizations safely and securely develop and manage AI. One of the documents is guidance intended to “help software developers mitigate the risk stemming from generative AI and dual-use foundation models.” Another is a platform designed to help AI users and developers measure how certain types of attacks can degrade the performance of an AI system. Two additional releases contain guidance to help manage the risks of generative AI, and the final release proposes a plan for US stakeholders to work with others around the world on AI standards. The last three were released in their final versions.

This publication is intended for general informational purposes only and does not constitute legal advice or a solicitation to provide legal services. The information in this publication is not intended to create, and receipt of it does not constitute, a lawyer-client relationship. Readers should not act upon this information without seeking professional legal counsel. The views and opinions expressed herein represent those of the individual author only and are not necessarily the views of Clark Hill PLC. Although we attempt to ensure that postings on our website are complete, accurate, and up to date, we assume no responsibility for their completeness, accuracy, or timeliness.
