Thursday, August 21, 2025

SIM Card Swapping: How Criminals Can Steal Your Phone Number (and What You Can Do About It)


Most people know to watch out for suspicious emails, but there’s another scam that doesn’t get enough attention: SIM card swapping. This type of fraud can happen to anyone, and the results can be devastating—criminals can steal your phone number, take over your accounts, and even access your bank or cryptocurrency funds.

What Is SIM Card Swapping?

Your phone number lives on the little chip inside your phone called a SIM card. If a criminal can convince your phone company to move your number to a SIM card they control, they gain access to your calls and text messages. That may not sound like much, but here’s why it matters:

  • Many companies send login codes and password resets by text.
  • Once scammers receive those codes, they can break into your accounts.
  • This can lead to stolen money, stolen identity, and a huge headache to recover control.

How It Happens

Scammers gather personal details about you first—like your address, birthday, or account information. Then:

  1. They call your mobile carrier pretending to be you.
  2. They claim their phone is lost or broken and ask for a new SIM card.
  3. The carrier activates that new SIM card.
  4. Suddenly, your phone stops working, and the scammer now has your number.

How to Know If Your SIM Card Was Swapped

  • Your phone suddenly has no service for calls or texts.
  • Friends or family say they tried calling you but someone else answered, or the call didn’t go through.
  • You get alerts from your bank, email, or other accounts about password changes you didn’t request.

If any of these happen, contact your carrier immediately.

How to Protect Yourself

The good news is that you can take a few simple steps to make it harder for scammers:

  1. Add a PIN or password to your mobile account – This makes it harder for someone to impersonate you.
  2. Use app-based login codes instead of text messages – Apps like Google Authenticator or Authy generate codes right on your phone, which can’t be stolen through SIM swapping.
  3. Keep personal info private – The less criminals know about you, the harder it is for them to convince your carrier they are you.
  4. Monitor your accounts – Turn on alerts for suspicious logins or money transfers.
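
Step 2 works because authenticator apps generate codes locally from a shared secret and the current time, so there is nothing for a SIM swapper to intercept. As a rough illustration, here is a minimal sketch of the TOTP algorithm (RFC 6238) that such apps implement, using the RFC's published test secret rather than a real account key:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (SHA-1 variant)."""
    pad = "=" * (-len(secret_b32) % 8)
    key = base64.b32decode(secret_b32.upper() + pad)
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)                     # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                           # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T = 59 seconds
secret = base64.b32encode(b"12345678901234567890").decode()
print(totp(secret, for_time=59))   # 287082
```

The code never leaves the device, which is exactly why it survives a SIM swap: the attacker gets your texts, not your secret.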

Set Up Extra Protection with Your Carrier

Each major carrier offers a SIM lock or account PIN. Contact your carrier’s support line, or check its website or app, for the exact steps to enable it on your account.

Final Takeaway

SIM card swapping isn’t about hacking your phone—it’s about tricking your phone company. But with a few minutes of setup, you can make your account much harder to fake.

It is like adding a deadbolt to your front door. Most burglars won’t bother if it looks like too much work.

Take time today to implement SIM locking and add that PIN or password with your carrier. It could save you a lot of stress, money, and time down the road.

Tuesday, August 19, 2025

The Dark Side of Leadership and Accountability: The Ethics of Plausible Deniability

Plausible Deniability

Plausible deniability is the capacity of an individual—often a senior official or leader—to credibly deny knowledge of or responsibility for illicit or unethical actions carried out by subordinates or associates, due to a lack of direct evidence linking them to those actions. The denial remains "plausible" because the circumstances or absence of proof prevent conclusive attribution, even if the individual was involved or willfully ignorant. This practice often involves deliberately structuring relationships and communications to ensure deniability, enabling those in authority to escape blame or legal consequences if activities are exposed.

Examples

Creating plausible deniability in the context of information technology and digital forensics may involve technical mechanisms or strategies that enable a person to credibly deny knowledge of, or control over, certain data or actions. Examples include:

  • Deniable Encrypted File Systems: Software such as VeraCrypt enables users to create hidden encrypted volumes within other encrypted containers. If compelled to reveal a password, a user can provide access to the “outer” volume while denying the existence of the “hidden” one. The existence of the hidden volume typically cannot be proven through standard forensic methods if configured correctly.
  • Hidden Operating Systems: VeraCrypt also supports the creation of a hidden OS within an encrypted partition. If a device is seized, the user can provide credentials for the decoy OS while maintaining plausible deniability about the hidden OS. Forensic detection becomes difficult if the hidden OS leaves no traces outside its partition.
  • Deniable Communication Protocols: Messaging solutions like Signal employ deniable authentication. Even if a transcript of communications is captured, it may be difficult for a third party to decrypt and prove who authored or participated in a conversation.
  • Anonymous Accounts: The creation and use of anonymous online accounts and pseudonymous email addresses allow users to plausibly deny authorship or control of content, as nothing is directly tied to their real identity if all technical precautions are maintained.
  • Obfuscation and Metadata Removal: Removing or falsifying metadata from documents, images, or other digital evidence can make attribution of authorship or origin difficult, supporting plausible deniability for content creators or transmitters.

These methods can be used to protect privacy and sensitive data, but they can also be abused to frustrate investigations and provide cover for illicit activity.
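
As a concrete illustration of the metadata-removal point above: EXIF data in a JPEG lives in APP1 segments, which can be dropped by rewriting the file segment by segment. This is a simplified sketch (real tools such as exiftool handle many more cases), and the sample bytes below are synthetic:

```python
def strip_exif(jpeg: bytes) -> bytes:
    """Drop APP1 (EXIF/XMP) segments from a JPEG byte string."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG"
    out = bytearray(jpeg[:2])
    i = 2
    while i + 1 < len(jpeg):
        if jpeg[i] != 0xFF:
            raise ValueError("corrupt segment stream")
        marker = jpeg[i + 1]
        if marker == 0xDA:                               # start of scan: copy the rest verbatim
            out += jpeg[i:]
            break
        if marker == 0xD9 or 0xD0 <= marker <= 0xD7:     # EOI / RSTn markers have no length
            out += jpeg[i:i + 2]
            i += 2
            continue
        length = int.from_bytes(jpeg[i + 2:i + 4], "big")
        if marker != 0xE1:                               # keep every segment except APP1
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)

# Synthetic example: SOI + APP1 (EXIF) + APP0 + EOI
sample = (b"\xff\xd8"
          b"\xff\xe1\x00\x08Exif\x00\x00"   # APP1/EXIF segment (removed)
          b"\xff\xe0\x00\x04JF"             # APP0 segment (kept)
          b"\xff\xd9")                      # EOI
cleaned = strip_exif(sample)
print(b"Exif" in cleaned)   # False
```

The same idea applies to office documents and PDFs, where authorship and revision history live in well-defined metadata blocks.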

Challenging and Disproving Denials

Plausible deniability can be legally challenged or disproved in some situations, particularly when there is sufficient evidence to show that a person in authority did, in fact, have knowledge of or involvement in the questionable actions. Common scenarios in which plausible deniability fails or is overcome in court include:

  • Direct or Circumstantial Evidence: If investigators or prosecutors uncover direct evidence (such as emails, messages, recorded conversations, or documents) tying the individual to the actions, deniability collapses. Even strong circumstantial evidence can establish knowledge or intent, undermining plausible deniability.
  • Command Responsibility Doctrine: In military, law enforcement, or organizational contexts, leaders can be held legally responsible for the actions of subordinates if they knew or should have known about illegal acts and failed to prevent or punish them. Plausible deniability is not a defense if it can be shown that an official intentionally remained ignorant or deliberately failed to supervise.
  • Willful Blindness: Courts may challenge claims of plausible deniability if they find that a person “deliberately avoided” acquiring knowledge, a doctrine known as willful blindness. A person cannot escape liability simply by intentionally avoiding learning about potentially illegal activities.
  • Patterns of Conduct: Repeated patterns of behavior, communication, or organizational structure can indicate a deliberate attempt to insulate higher-ups from information while still enabling or authorizing misconduct.
  • Pleading Standards in Civil Cases: Under modern pleading standards (see Twombly, Iqbal), allegations must be plausible, not just possible. If a plaintiff presents enough factual content to allow an inference that the defendant was aware or involved, plausible deniability can be challenged at the motion to dismiss stage.
  • Legal Precedents: In cases such as Ashcroft v. Iqbal, the U.S. Supreme Court addressed whether defendants could be held liable if they were aware of subordinates’ actions, even if they denied direct involvement. The courts look for factual allegations that make liability plausible, not just possible.

Ashcroft v. Iqbal

In Ashcroft v. Iqbal (2009), the U.S. Supreme Court addressed what makes claims against government officials “plausible” rather than merely possible. Javaid Iqbal alleged that officials, including former Attorney General Ashcroft and FBI Director Mueller, discriminated against him after 9/11. The Court held that Iqbal’s complaint did not contain enough specific factual content to plausibly suggest that Ashcroft and Mueller personally adopted discriminatory detention policies; his claims rested mostly on general accusations that the officials “knew of, condoned, and willfully and maliciously agreed to subject” him to abuse “as a matter of policy,” without specific facts tying them to unconstitutional conduct. Iqbal therefore lost. To overcome denials and survive a motion to dismiss, a complaint must contain enough factual content to support a reasonable inference of personal responsibility, not just a possible one; until it does, plausible deniability effectively holds.

Summary

In summary, plausible deniability is not absolute—legal systems have developed doctrines and standards (such as command responsibility, willful blindness, and specific pleading requirements) to pierce denials when sufficient evidence exists that a person knew of or participated in the conduct in question.

Associated Resources

  • Studies discussing paradoxical leadership behavior, which often touches on the dark side of leadership and accountability, appear in peer-reviewed leadership and organizational psychology publications. See: Lee, A., Lyubovnikova, J., Zheng, Y., & Li, Z. F. (2023). Paradoxical leadership: A meta-analytical review. Frontiers in Organizational Psychology, 1, 1229543. https://doi.org/10.3389/forgp.2023.1229543
  • The doctrine of “command responsibility” is an important subject in legal scholarship, especially in international law and military law reviews. See: Chantal Meloni, Command Responsibility: Mode of Liability for the Crimes of Subordinates or Separate Offence of the Superior?, Journal of International Criminal Justice, Volume 5, Issue 3, July 2007, Pages 619–637, https://doi.org/10.1093/jicj/mqm029
  • Foundational legal analysis for willful blindness and command responsibility can be found in discussions of U.S. v. Jewell, Ashcroft v. Iqbal, and Bell Atlantic Corp. v. Twombly. See legal journals and Supreme Court case analyses for detailed precedent.

Monday, July 07, 2025

Ubiquitous Technical Surveillance & Countermeasures: Existential Threats & Mitigations

Ubiquitous Technical Surveillance (UTS) is the widespread collection and analysis of data from various sources—ranging from visual and electronic devices to financial and travel records—for the purpose of connecting individuals, events, and locations. 

This surveillance poses risks to government operations, business organizations, and individuals alike, threatening to compromise sensitive investigations, personal privacy, and organizational security. The surprising findings of a recent audit of FBI techniques to address UTS further heighten the need for awareness and response to the threats. 

As the sophistication and reach of surveillance technologies continue to grow, understanding the nature of UTS and implementing effective Technical Surveillance Countermeasures (TSCM) is essential for safeguarding sensitive information and ensuring operational integrity. This work explores UTS and TSCM and suggests mitigation strategies to combat the threats.

Overview

Ubiquitous Technical Surveillance (UTS) refers to the pervasive collection and analysis of data, including visual, electronic, financial, travel, and online records, for the purpose of connecting individuals, events, and locations. The significance of the threats is outlined in a recently declassified but heavily redacted DOJ/OIG audit of the FBI's response to UTS (DOJ, 2025). Based on the number of redactions, particularly from the CIA's section of the report, it is reasonable to infer that many incidents have occurred that have not been reported to the public.

Technical Surveillance Countermeasures (TSCM) refers to specialized procedures and techniques designed to detect, locate, and neutralize unauthorized surveillance devices and eavesdropping threats. TSCM is commonly known as a "bug sweep" or "electronic counter-surveillance" and is used to protect sensitive information from being intercepted by covert listening devices, hidden cameras, or other forms of technical surveillance (REI, 2025; Conflict International Ltd, 2025).

UTS Devices, Data Sources, & Risks

Technical surveillance data collection can occur through a wide variety of devices and data sources: mobile phones and their signals, CCTV and smart devices, financial transactions, travel records, and online activity.

UTS is recognized as a significant and growing threat to government, business organizations, and individuals, with the potential to compromise investigations, business operations, and personal safety. When the collected technical surveillance information is in the wrong hands and used for nefarious purposes, harm can result.

UTS Threats

What are the UTS threats?

Significance: The Central Intelligence Agency (CIA) has described UTS as an “existential threat” because of its ability to compromise sensitive operations and personal safety (DOJ, 2025, p.4).

Risks:

  • Compromise of investigations, personnel PII, and sources (DOJ, 2025)
  • Exposure of operational details
  • Threats to personal and organizational security
  • Corporate espionage (Pinkerton, 2022)

Real-World UTS Scenarios

The following incidents are a sample of situations involving UTS.

  • Cartel Tracking via Phones and Cameras: Criminals exploited mobile phone data and city surveillance cameras to track and intimidate law enforcement and informants (DOJ, 2025, p.18).
  • Organized Crime and Phone Records: Crime groups used call logs and online searches to identify informants (DOJ, 2025, p.18).
  • Financial Metadata De-Anonymization: Commercial entities re-identified individuals from anonymized transaction data; in 2015, researchers at the Massachusetts Institute of Technology found that the data from just four transactions was enough to positively identify the cardholder 90% of the time (DOJ, 2025, p.17).
  • Travel Data Correlation: Adversaries used travel records to reveal covert meetings and operational activities (DOJ, 2025, p.1).
  • Online Activity Analysis: Aggregated web and social media data to build detailed personal profiles (DOJ, 2025, p.1).
  • Visual Surveillance: Use of CCTV and smart devices for real-time tracking and event reconstruction.
  • Electronic Device Tracking: Exploitation of device signals and unique identifiers for location tracking.
  • Combined Data Exploitation: Overlaying multiple data sources to establish “patterns of life.”
  • Commercial Data Brokers: Purchase of large datasets for profiling and targeting.
  • Compromised Communications: Poorly secured communications exposing sensitive activities.
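
The financial-metadata and “patterns of life” findings above can be illustrated with a toy re-identification example: in an “anonymized” transaction log, only a few observed (merchant, day) points are needed to single out one person. The data below is invented purely for illustration:

```python
from collections import defaultdict

# Toy "anonymized" transaction log: (user_id, merchant, day).
# In a released dataset, user_id would be a random token, not a name.
transactions = [
    ("u1", "coffee", 1), ("u1", "grocery", 2), ("u1", "gas", 3), ("u1", "books", 5),
    ("u2", "coffee", 1), ("u2", "grocery", 2), ("u2", "gas", 4), ("u2", "music", 5),
    ("u3", "coffee", 2), ("u3", "grocery", 2), ("u3", "gas", 3), ("u3", "books", 6),
]

def candidates(observed):
    """Users whose records contain every observed (merchant, day) point."""
    by_user = defaultdict(set)
    for uid, merchant, day in transactions:
        by_user[uid].add((merchant, day))
    return [u for u, pts in by_user.items() if set(observed) <= pts]

# One observed point still matches two users; a second pins down one.
print(candidates([("coffee", 1)]))               # ['u1', 'u2']
print(candidates([("coffee", 1), ("gas", 3)]))   # ['u1']
```

Each additional data point shrinks the candidate set, which is why overlaying even a handful of “anonymous” sources defeats anonymization so quickly.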

UTS Response: Organizational Challenges - FBI

The FBI identified UTS as an issue impacting the Bureau. However, a recently declassified audit of the FBI's approach to UTS by the Office of Inspector General (OIG) identified several challenges and areas for improvement in the FBI's approach (DOJ, 2025, p.4).

OIG Audit of the FBI's Efforts (DOJ, 2025)

  • Red Team Analysis: Initial FBI efforts were high-level and did not fully address known vulnerabilities.
  • FBI Strategic Planning: Ongoing development, but lacking clear authority and coordination.
  • Training Gaps: Basic UTS training is mandatory for FBI personnel, but advanced training is limited and optional.
  • Incident Response: FBI Data breaches revealed policy gaps and lack of coordinated response.
  • Recommendations: The FBI needs comprehensive vulnerability documentation, strategic planning, clear authority, and expanded training.

Countermeasures & Best Practices

Combating the threats from UTS is a daunting challenge, but several steps, both scenario-specific and general, can be taken to mitigate the threats.

Suggested General Countermeasures

  • Regular training on digital hygiene and counter-surveillance
  • Encryption of sensitive data and communications
  • Physical security for sensitive locations and devices
  • Vigilance and behavioral adaptation to signs of surveillance
  • Technical Surveillance Countermeasures (REI, 2025; Conflict International Ltd, 2025; EyeSpySupply, 2023)

Training & Awareness (DOJ, 2025)

  • Basic UTS Awareness: Should be mandatory for all FBI personnel.
  • Advanced UTS Training: Recommended for high-risk FBI roles; should be expanded and resourced.
  • Continuous Learning: Stay updated on emerging threats and countermeasures.

Incident Response Recommendations from the OIG Audit of the FBI (DOJ, 2025)

  • FBI should establish clear lines of authority for UTS incidents.
  • FBI should develop and rehearse coordinated response plans.
  • FBI should regularly review and update internal controls and policies.

Summary

The growing sophistication and reach of surveillance technologies have made UTS a threat to government operations, business organizations, and individuals. Real-world incidents demonstrate how adversaries exploit mobile phone data, surveillance cameras, financial transactions, and travel records to compromise investigations, expose operational details, and threaten personal and organizational security.

The FBI, recognizing UTS as an existential threat, has faced challenges such as insufficient planning, limited training, and gaps in incident response.

Technical Surveillance Countermeasures (TSCM), including procedures like bug sweeps and electronic counter-surveillance, are tools for detecting and mitigating unauthorized surveillance devices. Best practices for mitigation include regular training, encryption, physical security, and continuous awareness of emerging threats.

Conclusion

The risks posed by UTS are immediate and evolving, with the potential to undermine investigations, compromise privacy, and threaten organizational integrity. Effective countermeasures require a combination of technical solutions, organizational policies, and training. The findings of the OIG audit of the FBI highlight the need for clear authority, coordinated response plans, and regular updates to internal controls. As surveillance technologies continue to advance, adopting a proactive and comprehensive approach to counter-surveillance is important for safeguarding information and maintaining operational security.

References

Conflict International Ltd. (2025, June). Bug Sweeps (TSCM): Protecting Against AirTag Stalking and Modern Surveillance. https://conflictinternational.com/news/bug-sweeps-tscm-protecting-against-airtag-stalking-and-modern-surveillance

DOJ. (2025, June). Audit of the Federal Bureau of Investigation's Efforts to Mitigate the Effects of Ubiquitous Technical Surveillance. Department of Justice, Office of the Inspector General. https://oig.justice.gov/sites/default/files/reports/25-065.pdf

EyeSpySupply. (2023, December). The Importance of TSCM Equipment for Security. Blog. https://blog.eyespysupply.com/2023/12/29/the-importance-of-tscm-equipment-for-security/

Pinkerton. (2022, July). Technical Surveillance Countermeasures to Prevent Corporate Espionage. https://pinkerton.com/our-insights/blog/technical-surveillance-countermeasures-to-prevent-corporate-espionage

REI. (2025). Research Electronics Institute. TSCM Equipment and Training. https://reiusa.net/

Friday, June 27, 2025

Disturbing Revelations - Annual Assessment of the IRS’s Information Technology Program

The Treasury Inspector General for Tax Administration (TIGTA) released its annual assessment of the IRS’s Information Technology (IT) Program for 2024. This review, based on audit reports from TIGTA and the Government Accountability Office (GAO), paints a mixed picture: while progress has been made in some areas, significant vulnerabilities and management failures persist. These issues threaten the security of taxpayer data, the effectiveness of IRS operations, and public trust in the agency.

Summary of Findings

The IRS is a massive and complex organization, collecting $5.1 trillion in federal tax payments and processing 267 million tax returns and forms in FY 2024. Its reliance on computerized systems is absolute, making IT security and modernization paramount. Despite efforts to modernize and secure its systems, the IRS faces mounting challenges due to funding cuts, workforce reductions, and persistent weaknesses in cybersecurity, access controls, and IT asset management.

Audits revealed that while the IRS is making strides in areas like identity proofing for its Direct File pilot and blocking suspicious email websites, it falls short in critical cybersecurity functions, proper management of user access, timely vulnerability remediation, and oversight of cloud services. Insider threats, incomplete audit trails, and inadequate separation of duties further exacerbate the risks.

Some Disturbing Revelations

  • The IRS’s cybersecurity program was rated “not fully effective,” failing in three of five core cybersecurity functions (Identify, Protect, Detect), including shortcomings in system inventories, vulnerability remediation, encryption, and multifactor authentication.
  • 279 former IRS users retained access to sensitive systems for up to 502 days after separation, exposing taxpayer data to unauthorized access and potential misuse.
  • The IRS failed to timely remediate tens of thousands of critical and high-risk vulnerabilities, including 2,048 critical and 13,558 high-risk vulnerabilities in a single security application environment.
  • Personally Identifiable Information (PII) for over 613,000 IRS user authentications was sent to unauthorized locations outside the U.S. due to a vendor’s flaw in the Login.gov system, placing sensitive data at risk.
  • The IRS was unable to locate all cloud services contracts or determine their value for nearly half of its cloud applications, undermining financial oversight and increasing the risk of waste or duplication.
  • 35% of IRS systems required to send audit trails for detecting unauthorized access to PII and Federal Tax Information failed to do so, severely limiting the ability to investigate or detect data breaches.
  • The IRS did not fully comply with federal mandates to block TikTok on government devices, leaving more than 2,800 mobile devices and 900 computers potentially exposed to foreign surveillance risks.
  • Inadequate separation of duties was found in 70% of reviewed cloud systems, with the same individuals controlling multiple key roles, heightening the risk of fraud or error going undetected.
  • The IRS’s data loss prevention controls could be circumvented, allowing users to intentionally exfiltrate sensitive taxpayer data despite existing monitoring tools.
  • Despite identifying 334 legacy systems needing updates or retirement, only 2 had specific decommissioning plans, leaving the IRS reliant on outdated, potentially insecure systems.

The findings underscore the need for the IRS to address IT security and management deficiencies. Without corrective action, the agency remains vulnerable to internal and external threats, risking taxpayer privacy, financial integrity, and the effective administration of the nation’s tax system.

Read the full report at this link: https://www.tigta.gov/sites/default/files/reports/2025-06/20252S0007fr.pdf

Friday, June 13, 2025

Recruiters Targeted by Fake Job Seekers in Malware Scam

Recruiters are facing a cyber threat as financially motivated hackers, notably the FIN6 group (also known as Skeleton Spider), shift tactics to social engineering campaigns. The attackers are posing as job seekers on popular platforms like LinkedIn and Indeed, luring unsuspecting recruiters into downloading malware via fake portfolio websites.

How the Scam Works

The scam starts when cybercriminals, pretending to be legitimate job applicants, reach out to recruiters through job-hunting platforms. After initial contact, they send a follow-up phishing email that directs the recruiter to a convincing online portfolio site. These sites, often hosted on Amazon Web Services (AWS), mimic authentic job seeker pages, sometimes using plausible names associated with the applicant.

To evade automated security systems, the phishing emails do not contain clickable hyperlinks. Instead, recruiters are prompted to manually type the provided web address into their browser, which helps the attackers bypass link-detection tools [1].

The Malware: More_eggs

Once on the fake portfolio site, the recruiter is asked to complete a CAPTCHA and other checks to prove they are human, further evading automated scanners. If they proceed, they are offered a ZIP file to download—purportedly a resume or work sample. Inside the ZIP is a Windows shortcut (.LNK) file that, when opened, executes a hidden JavaScript payload using wscript.exe. This payload connects to the attackers' command-and-control server and installs the More_eggs backdoor.

More_eggs is a modular, JavaScript-based malware-as-a-service tool that allows attackers to:

  • Remotely execute commands
  • Steal credentials
  • Deliver additional malicious payloads

Notably, More_eggs operates in memory on the user's device, making it harder for traditional antivirus solutions to detect.
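
One practical screening step suggested by this delivery chain is to inspect archive contents for shortcut and script extensions before opening anything. The sketch below is illustrative, not a description of FIN6's actual files; the extension list and the sample archive are assumptions for the demo:

```python
import io
import zipfile

# Extensions commonly abused as droppers (illustrative, not exhaustive).
SUSPICIOUS = {".lnk", ".js", ".jse", ".vbs", ".wsf", ".hta"}

def risky_members(zip_bytes: bytes):
    """List names inside a ZIP whose extensions are commonly abused."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [n for n in zf.namelist()
                if "." in n and "." + n.rsplit(".", 1)[1].lower() in SUSPICIOUS]

# Build a sample ZIP in memory: a decoy document plus a shortcut file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("resume.pdf", b"%PDF-1.4 ...")
    zf.writestr("portfolio.lnk", b"fake shortcut payload")

print(risky_members(buf.getvalue()))   # ['portfolio.lnk']
```

A resume or portfolio has no legitimate reason to contain a .LNK or script file, so any hit here is a strong signal to stop and escalate to the security team.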

Evasion Tactics

FIN6 leverages several techniques to avoid detection and takedown:

  • Anonymous Domain Registration: Domains are registered through GoDaddy with privacy services, obscuring the true identity of the registrants [1].
  • Cloud Hosting: Hosting malicious sites on AWS infrastructure provides legitimacy and resilience against quick takedowns [1].
  • Human Verification: CAPTCHAs and environmental checks ensure only real users (not automated scanners) reach the malware download stage [1].

Industry Response

AWS responded to the incident by reaffirming its commitment to enforcing its terms of service and collaborating with the security research community. The company encourages reporting of any suspected abuse through its dedicated channels for swift action.

Takeaways for Recruiters and Organizations

This campaign highlights the evolving landscape of cyber threats, where even those in hiring roles are now prime targets. Key steps for recruiters and organizations to protect themselves include:

  • Treat unsolicited portfolio links with suspicion, especially if they require manual entry into a browser.
  • Avoid downloading ZIP files or clicking on shortcut files from unknown or untrusted sources.
  • Ensure endpoint security solutions are updated and capable of detecting in-memory malware.
  • Report suspicious activity to IT or security teams immediately.

Recruiters and organizations should be aware of these attacks and exercise caution when handling files from unknown job applicants.

Thursday, June 12, 2025

Disturbing Spying Revelations: Meta/Facebook/Instagram & Yandex

Overview:

The web page https://localmess.github.io/ discloses a previously undocumented and highly invasive tracking technique used by Meta (Facebook/Instagram) and Yandex that affected billions of Android users. Researchers [4] discovered that this method covertly linked users' mobile web browsing sessions to their identities in native apps, bypassing standard privacy protections. 

The practice was active until early June 2025, when both Meta and Yandex, after being caught with their hands in the proverbial PII cookie-jar, ceased these behaviors following public disclosure [1][2][3].

Key Findings

1. Covert Web-to-App Tracking via Localhost on Android

  • Meta and Yandex embedded scripts (Meta Pixel and Yandex Metrica) on millions of websites.
  • When a user visited such a site in a mobile browser on Android, the script would communicate directly with native apps (like Facebook, Instagram, or Yandex Maps) installed on the same device.
  • This communication happened via localhost sockets—special network ports on the device that allow apps to talk to each other without user knowledge or consent [1][3].

2. How the Tracking Worked

  • Meta Pixel:
      o The Meta Pixel JavaScript sent the browser’s _fbp cookie (used for advertising and analytics) to Meta apps via WebRTC (using STUN/TURN protocols) on specific UDP ports (12580–12585).
      o Native Facebook and Instagram apps listened on these ports in the background, received the _fbp value, and linked it to the user’s app identity, effectively de-anonymizing web visits [1][3].
      o This bypassed protections like cookie clearing, incognito mode, and Android permission controls.
  • Yandex Metrica:
      o Yandex’s script sent HTTP/HTTPS requests with tracking data to localhost ports (29009, 29010, 30102, 30103), where Yandex apps listened.
      o The apps responded with device identifiers (e.g., the Android Advertising ID), which the script then sent to Yandex servers, bridging web and app identities [1].
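
The localhost channel itself is ordinary socket programming, which is why it slipped past web-centric privacy controls. Below is a simplified sketch of the mechanism: one thread stands in for the native app listening on localhost, the main flow stands in for the tracking script in the browser page (a free port is used here instead of the fixed ports reported in the research):

```python
import socket
import threading

# The "native app" side: bind a localhost listener. The real apps used
# fixed ports (e.g., 30103); port 0 just picks a free one for the demo.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

linked = {}

def app_side():
    conn, _ = srv.accept()
    linked["web_id"] = conn.recv(1024).decode()  # cookie sent by the web page
    conn.sendall(b"device-id-12345")             # reply with a device identifier
    conn.close()

t = threading.Thread(target=app_side)
t.start()

# The "tracking script" side, as run by a page in the mobile browser.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"_fbp=web-cookie-abc")
device_id = cli.recv(1024).decode()
cli.close()
t.join()
srv.close()

print(linked["web_id"], "<->", device_id)   # web and app identities now linked
```

Because both endpoints are on the same device, no traffic ever crosses the network, so cookie controls, incognito mode, and network monitoring never see the exchange.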

3. Privacy and Security Implications

This method allowed companies to:

  • Circumvent privacy mechanisms such as incognito mode, cookie deletion, and even Android’s app sandboxing.
  • Link browsing habits and cookies with persistent app/user identifiers, creating a cross-context profile of the user.
  • Potentially expose browsing history to any third-party app listening on those ports, raising the risk of malicious exploitation [1][3].

4. Prevalence

  • Meta Pixel was found on over 5.8 million websites; Yandex Metrica on nearly 3 million.
  • In crawling studies, thousands of top-ranked sites were observed attempting localhost communications, often before users had given consent to tracking cookies [1].

5. Timeline and Disclosure

  • Yandex had used this technique since 2017; Meta adopted similar methods in late 2024.
  • Following responsible disclosure to browser vendors and public reporting in June 2025, both companies stopped the practice. Major browsers (Chrome, Firefox, DuckDuckGo, Brave) have since implemented or are developing mitigations to block such localhost abuse [1][3].

Technical Details

Aspect               | Meta/Facebook Pixel                              | Yandex Metrica
---------------------|--------------------------------------------------|-------------------------------------------------
Communication Method | WebRTC STUN/TURN to UDP ports (12580–12585)      | HTTP/HTTPS requests to TCP ports (29009, etc.)
Data Shared          | _fbp cookie, browser metadata, page URLs         | Device IDs (AAID), browser metadata
Apps Involved        | Facebook, Instagram                              | Yandex Maps, Browser, Navigator, etc.
User Awareness       | None; bypassed consent and privacy controls      | None; bypassed consent and privacy controls
Platform Affected    | Android only (no evidence for iOS or desktop)    | Android only (no evidence for iOS or desktop)
Risk of Abuse        | High: enables de-anonymization, history leakage  | High: enables de-anonymization, history leakage

Broader Implications

  • Bypassing Privacy Controls: This method undermined the effectiveness of cookie controls, incognito/private browsing, and Android’s app isolation, showing that even sophisticated privacy tools can be circumvented by creative inter-app communications [1][3].
  • Need for Platform-Level Fixes: Browser and OS vendors are now patching this specific exploit, but the underlying issue—unrestricted localhost socket access—remains a systemic risk on Android. The researchers call for stricter platform policies and user-facing controls for localhost access [1].
  • User and Developer Awareness: Most website owners were unaware their sites enabled this tracking. End-users had no indication or control over the process. The lack of transparency and documentation from Meta and Yandex is highlighted as a major concern [1].

Conclusion

The research revealed a disturbing tracking vector that allowed Meta and Yandex to link users’ web and app identities on Android at a massive scale, defeating standard privacy safeguards. The disclosure led to rapid mitigation, but the incident underscores the need for deeper systemic changes in how browsers and mobile platforms handle inter-app communications and tracking[1][2][3]. “This tracking method defeats Android's inter-process isolation and tracking protections based on partitioning, sandboxing, or clearing client-side state.”[1]

1. https://localmess.github.io
2. https://www.grc.com/sn/sn-1029-notes.pdf
3. https://gigazine.net/gsc_news/en/20250604-meta-yandex-tracking/
4. Researchers & Authors of the localmess GitHub page: Aniketh Girish (PhD student), Gunes Acar (Assistant Professor), Narseo Vallina-Rodriguez (Associate Professor), Nipuna Weerasekara (PhD student), Tim Vlummens (PhD student).

Note: Perplexity.AI was used to assist in preparing this report.