Sunday, September 14, 2025

50 Cybersecurity Tips for Personal & Business Protection

Cybersecurity is no longer optional—it’s essential. Cyber threats such as phishing, ransomware, and identity theft continue to rise, impacting both individuals and businesses. Implementing solid cybersecurity practices can strengthen your online safety and protect sensitive data.

This guide provides 50 cybersecurity tips to improve your security posture, covering personal safety, workplace security, data protection, and more.


General Cybersecurity Tips

  • Understand Cybersecurity Risks: Anyone can be a target of a cyberattack, not just large organizations.

  • Use Strong, Unique Passwords: Create complex passwords and avoid reusing them. 
  • Enable Two-Factor Authentication (2FA): Adds an extra login barrier against credential theft. 
  • Keep Software Updated: Updates fix vulnerabilities and prevent malware infections. 
  • Back Up Data Regularly: Use encrypted cloud storage or external drives. 
  • Avoid Public Wi-Fi for Banking or Work: Use a VPN for secure browsing. 
  • Beware of Phishing Emails: Always double-check the sender before clicking links.
  • Secure Your Home Wi-Fi: Change default router credentials and use WPA3 encryption.
  • Use Antivirus/Anti-Malware Software: Select reputable security solutions.
  • Check Privacy Settings: Manage what information you share on social media.
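The "strong, unique passwords" tip above can be made concrete in a few lines of Python using the standard-library secrets module, which is designed for security-sensitive randomness. The function name generate_password is ours, invented for this sketch:

```python
import secrets
import string

def generate_password(length=16):
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Each call produces an independent password, so nothing is ever reused.
print(generate_password())
```

In practice a password manager does this for you; the point of the sketch is that uniqueness comes from generating, not remembering, each credential.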
 

Device Security

  • Lock devices with strong PINs or biometrics.
  • Avoid public charging stations—carry your own cables and adapters.
  • Use a standard user account instead of an admin account for daily tasks.
  • Encrypt sensitive files to prevent unauthorized access.
  • Regularly patch IoT devices and change default credentials.


Email and Internet Use

  • Double-check sender information to avoid email spoofing. 
  • Never click on unknown links. 
  • Use secure, up-to-date browsers. 
  • Clear cache and cookies frequently. 
  • Download apps only from trusted marketplaces.
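The "double-check sender information" tip above can be partially automated. The sketch below uses Python's standard email module to flag one crude spoofing sign, a Reply-To domain that differs from the From domain; spoofing_red_flags is a hypothetical helper, and real mail filters inspect far more (SPF, DKIM, DMARC):

```python
from email import message_from_string
from email.utils import parseaddr

def spoofing_red_flags(raw_headers):
    """Return simple red flags found in a message's headers."""
    msg = message_from_string(raw_headers)
    _, from_addr = parseaddr(msg.get("From", ""))
    _, reply_addr = parseaddr(msg.get("Reply-To", ""))
    from_domain = from_addr.rsplit("@", 1)[-1].lower() if "@" in from_addr else ""
    reply_domain = reply_addr.rsplit("@", 1)[-1].lower() if "@" in reply_addr else ""
    flags = []
    if reply_domain and reply_domain != from_domain:
        # Replies would silently go somewhere other than the apparent sender.
        flags.append(f"Reply-To domain ({reply_domain}) differs from From domain ({from_domain})")
    return flags

headers = "From: IT Support <helpdesk@example.com>\nReply-To: attacker@evil.example\n\n"
print(spoofing_red_flags(headers))
```

The addresses are invented examples; the technique, comparing the visible sender to where replies actually go, is the same one you should apply by eye before clicking anything.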
 

Workplace Cybersecurity

  • Train employees on phishing awareness and password hygiene. 
  • Use multi-factor authentication (MFA) for company logins.
  • Establish and update written security policies.
  • Perform regular penetration testing and security audits.
  • Rely on encrypted communication tools for business.
 

Data Protection

  • Enforce "minimum necessary" access to internal files.
  • Monitor data transfers to detect shadow IT usage. 
  • Apply data loss prevention (DLP) tools.
  • Encrypt and secure cloud-stored files.
  • Update written policies to reflect new threat landscapes.
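The DLP idea above, scanning outgoing content for sensitive patterns, can be sketched with simple regular expressions. Commercial DLP products use far richer detection (fingerprinting, checksums, context); the pattern names and rules here are illustrative only:

```python
import re

# Hypothetical patterns a DLP rule set might include, for illustration only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def scan_outgoing(text):
    """Return the names of sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]
```

A real deployment would run checks like this at email gateways and file-share boundaries, then block or quarantine matches rather than merely report them.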
 

Incident Response

  • Develop a written incident response plan.
  • Train with simulated breach scenarios.
  • Encourage instant reporting of suspicious behaviors.
  • Contain attacks quickly to minimize damage.
  • Keep clients, regulators, and partners informed in case of breaches.


Physical Cybersecurity

  • Secure physical access controls in workspaces.
  • Install CCTV and remote monitoring for critical areas.
  • Shred sensitive records before disposal.
  • Deploy badge-based entry systems.
  • Implement MDM (mobile device management) for company smartphones.
 

Advanced Cybersecurity Measures 

  • Shift toward a Zero Trust Architecture.
  • Deploy EDR (Endpoint Detection and Response) tools.
  • Use network segmentation to isolate sensitive systems.
  • Integrate threat intelligence feeds. 
  • Partner with peer organizations to share best practices.
 

Personal Cybersecurity Practices

  • Disconnect when devices are not in use. 
  • Use trusted password managers like 1Password or Bitwarden. 
  • Be skeptical of free services that seem too good to be true. 
  • Check your online banking and email account history regularly. 
  • Research tools and apps before installation.
 

Why Cybersecurity Best Practices Matter

Implementing even a few of these cybersecurity tips can drastically reduce exposure to digital threats. From password safety to incident response readiness, both individuals and organizations must take proactive steps to minimize risk.

For additional resources, also read:

  • How to Protect Against Phishing Attacks 
  • Securing IoT Devices at Home and Work 
  • Top Cybersecurity Tools for Small Businesses
 

SIM Card Swapping: How to Prevent Account Takeover Fraud


SIM card swapping fraud can happen to anyone. If criminals succeed, they can steal a phone number, take over accounts, and access bank or cryptocurrency funds. Most people know to be cautious about phishing emails, but SIM swapping scams often go unnoticed and can be devastating.

What Is SIM Card Swapping?

Your phone number is stored on a small SIM chip in your device. If a criminal convinces your provider to move your number to a SIM card they control, they gain access to your calls and texts. This matters because:

  • Many companies send login codes and password resets by text.
  • Scammers intercept those codes and can break into important accounts.
  • Victims face stolen money, identity theft, and recovery headaches.

How SIM Card Swapping Happens

  • Fraudsters first gather details about a victim, such as address, date of birth, and account numbers.
  • They call the mobile carrier, pretending to be the victim.
  • They claim the phone is lost or broken, requesting a new SIM card.
  • The carrier activates that SIM, transferring the victim’s number.
  • The victim’s phone stops working, and the scammer now controls the number.
  • SIM card swapping can occur with both physical SIM cards and eSIMs, as the underlying attack involves transferring a phone number or carrier profile rather than the physical card itself.

Signs Your SIM Card Was Swapped

Watch for these red flags:

  • Your phone suddenly has no service for calls or texts.
  • Family or friends report that someone else answered your number, or that their calls fail entirely.
  • Alerts arrive from bank, email, or other accounts about password changes you did not request.

If any occur, contact your carrier immediately.


How to Protect Yourself

Defend against SIM swap fraud with these best practices:

  • Add a PIN or password to your mobile account: This makes impersonation harder for attackers.
  • Use app-based login codes (Google Authenticator, Authy) instead of SMS: These cannot be intercepted by SIM swapping.
  • Keep personal info private: The less a criminal knows, the harder their attack.
  • Monitor your accounts: Enable alerts for suspicious logins or money movements.
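The app-based codes recommended above are typically TOTP (RFC 6238): your phone and the service share a secret and independently derive a short code from the current time, so no code ever travels over SMS for a SIM swapper to intercept. A minimal standard-library sketch of the algorithm:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 published test vector: secret "12345678901234567890", t=59, 8 digits.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # prints 94287082
```

Apps like Google Authenticator and Authy implement exactly this scheme, which is why moving your phone number to a new SIM gains an attacker nothing.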

Set Up Extra SIM Protection with Your Carrier

Major carriers provide additional security—see their pages to set up SIM locks, account PINs, and fraud alerts:

  • Google Fi: Enable SIM Number Lock
  • AT&T: Add a wireless passcode
  • Verizon: Set an account PIN
  • T-Mobile: Enable account security PIN/Passcode

Be sure to visit your carrier's support pages for step-by-step instructions.


Final Takeaway

SIM card swapping isn’t about hacking a phone—it’s social engineering aimed at telecom providers. A few minutes setting up SIM locking and carrier PINs can prevent account takeover and protect identity. Think of SIM protection as adding a deadbolt before trouble happens—most criminals move on if it looks too difficult.

Take time today to secure your SIM and implement strong account protection. It could save stress, money, and time.

Saturday, September 13, 2025

Understanding False-Positive Hallucinations in AI Research: Implications for Academic Integrity

Generative artificial intelligence tools have revolutionized academic research, offering valuable support while introducing new challenges. Among the most pressing is the phenomenon of false-positive hallucinations. This article analyzes the nature, prevalence, and impact of hallucinations in AI-assisted academic work—and shares practical strategies for educators and students to address them.

What Are False-Positive Hallucinations in AI Research?

AI hallucinations occur when large language models confidently produce content that appears factual and authoritative, but is actually incorrect or fabricated. In academic contexts, false-positive hallucinations refer to AI-generated information that is presented as legitimate scholarly content, despite being entirely invented.

  • Hallucinations may be categorized by degree and type—such as acronym ambiguity, numeric errors, or fabricated references.
  • Unlike deliberate human misinformation, these errors result from underlying probabilistic processes in AI models.

The most alarming academic hallucinations involve fake citations and references. AI can generate plausible author names, credible article titles, and authentic-looking journal details that do not exist in reality.

Common Types of Academic Hallucinations

  • Reference Fabrication: AI creates non-existent sources and citations.
  • Fact Fabrication: AI invents false statistics or study outcomes.
  • Expert Fabrication: AI attributes quotes or opinions to fictional or unrelated authorities.
  • Methodological Fabrication: AI describes studies or experiments that never occurred.

How Prevalent Are AI Hallucinations in Academia?

False-positive hallucinations are a widespread issue across academic domains. Studies found that up to 69% of medical references generated by ChatGPT are fabricated, with many appearing professionally formatted. Leading legal AI tools also show hallucination rates between 17% and 33%, despite claims of being hallucination-free. Preliminary reviews reveal frequent generation of convincing—but entirely fictional—peer-reviewed sources.[2][3]

Notable Real-World Examples

Medical Research

ChatGPT has generated plausible journal article citations—complete with real researcher names—that simply do not exist. Such hallucinations pose a risk to medical decision-making if accepted as valid sources.

Legal Research

AI-powered legal research tools have created citations to fabricated court cases. These hallucinations often blend seamlessly with factual content, making them hard for experts and instructors to identify.

Academic Writing

AI has also invented fake conferences, institutions, and journal articles formatted with realistic details, misleading users and undermining academic credibility.

Should Students Be Required to Provide URLs for Sources?

Arguments in Favor

  • Direct URLs help verify the existence of sources.
  • Reduce risk of accepting hallucinated material.
  • Streamline instructors’ source checking.
  • Encourage lifelong habits of verification.

Arguments Against

  • Print and paywalled sources may not have URLs.
  • Could bias research toward online materials.
  • Increases the work required for students and instructors.
  • URL availability does not guarantee accuracy.

Balanced Solution

Require URLs, DOIs, or ISBNs for major claims where available—but teach broader verification and critical thinking alongside transparency about AI involvement.

Practical Strategies for Students

1. Verify Every Citation

  • Check references using library databases or search engines.
  • Cross-check key facts with multiple reliable sources.
  • Highlight statistical claims and ensure their credibility.
  • Use in-text citations linked to a comprehensive References section.
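As a small first-pass aid to citation checking, a script can filter out strings that could not be valid DOIs before you look anything up. The pattern below is a simplification, and a well-formed DOI can still be fabricated; real verification means resolving the DOI at doi.org or searching a library database.

```python
import re

# Simplified DOI shape: "10.", a registrant code, a slash, then a suffix.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi):
    """Cheap syntactic check; passing it does NOT prove the reference exists."""
    return bool(DOI_PATTERN.match(doi.strip()))
```

This only saves time on obvious junk; every citation that passes still needs a human to confirm the source is real and says what the paper claims.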

2. Use AI as a Supplement

  • Leverage AI for vocabulary and brainstorming, not complete research generation.
  • Critically review and refine AI suggestions.

3. Develop Critical Evaluation Skills

  • Question unlikely or overly perfect findings.
  • Probe for unsourced assumptions.
  • Ensure internal consistency across arguments and data.

4. Transparently Declare AI Use

  • State which parts of the work were assisted by AI.
  • Describe how references and facts were verified.

5. Combine Multiple Tools and Approaches

  • Compare outputs between different AI tools.
  • Use specialized hallucination detectors when available.
  • Seek human feedback from peers or instructors.

Conclusion: Balancing Integrity and Innovation

AI hallucinations present a significant challenge to academic integrity, threatening the reliability of research across fields. Rather than prohibiting AI, institutions should cultivate policies emphasizing transparency, verification, and critical skill-building. By combining the strengths of AI with rigorous human oversight, academia can continue to innovate—without sacrificing honesty and credibility. 

Securing Your Network: The TP-Link Controversy & Router Safety Tips

TP-Link, a major Chinese router manufacturer, is under investigation by U.S. authorities over national security concerns. A possible ban on its products in the U.S. is being considered, raising questions about cybersecurity, market dominance, and router safety for both home and business users.

Key Points of the TP-Link Investigation

  • Market Dominance: TP-Link controls about 65% of the U.S. market for home and small business routers.

  • Government Usage: TP-Link routers are deployed across federal agencies, including the Department of Defense and NASA.

  • Cybersecurity Concerns: Reports suggest Chinese hackers have compromised thousands of TP-Link routers to launch attacks on Western organizations.

  • Pricing Strategy: The DOJ is examining whether TP-Link’s below-market pricing strategy violates antitrust laws.

Potential Implications of a TP-Link Ban

If a ban on TP-Link devices is implemented, the U.S. router market could face major disruptions. A policy shift may happen as early as next year under the new administration. Such a move would leave millions of U.S. households and businesses searching for alternative router solutions.

Security Risks and Vulnerabilities in TP-Link Routers

  • Hacking Reports: Microsoft confirmed that a Chinese hacking group used compromised TP-Link routers in attacks on North American and European organizations.

  • CISA Alerts: The U.S. Cybersecurity and Infrastructure Security Agency identified vulnerabilities in TP-Link devices that could allow remote code execution.

  • Persistent Flaws: Researchers note that TP-Link routers often ship with unpatched security flaws, drawing criticism over poor vendor response.

How to Check if Your Router is Compromised

With router-based cyberattacks becoming more common, it’s important to detect signs of compromise early. Look out for:

  • Unexplained slow internet speeds

  • Difficulty logging into your router’s admin settings

  • Browser redirects to strange websites

  • Suspicious network activity during unusual hours

  • Unknown devices connected to your network

  • Unfamiliar software appearing on connected devices

Steps to Check Your Router

  1. Log into your router’s admin panel and review logs for suspicious activity.

  2. Check the device list for unknown or unauthorized entries.

  3. Verify DNS settings to ensure they haven’t been changed.

If you suspect a compromise:

  • Change the administrator password immediately.

  • Update your router’s firmware to the latest version.

  • Consider performing a factory reset for a clean start.
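Verifying DNS settings (step 3 above) boils down to comparing the resolvers your router reports against a known-good list. A toy sketch of that comparison; the trusted set here is illustrative, so substitute the values your ISP or organization actually uses:

```python
# Hypothetical known-good public resolvers, for illustration only.
TRUSTED_DNS = {"1.1.1.1", "8.8.8.8", "8.8.4.4", "9.9.9.9"}

def suspicious_dns(configured):
    """Return any configured resolver not on the trusted list."""
    return [server for server in configured if server not in TRUSTED_DNS]

# A hijacked router often points DNS at an attacker-controlled address.
print(suspicious_dns(["8.8.8.8", "203.0.113.66"]))  # prints ['203.0.113.66']
```

An unrecognized resolver is exactly how browser redirects to strange websites happen: every lookup your devices make is being answered by a server the attacker controls.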

Final Takeaway: Protecting Your Network

The TP-Link controversy adds to growing concerns about router security and foreign-manufactured hardware. Regardless of brand, users should take proactive cybersecurity measures, keep firmware updated, and regularly monitor their networks for suspicious behavior.

Staying informed ensures that both home networks and businesses remain protected against evolving cyber threats.


References

Slashdot – U.S. Weighs Banning TP-Link Routers
Reuters – U.S. Considers Ban on TP-Link
NordVPN – Router Malware
Business Insider – TP-Link Pricing Debate
Ars Technica – U.S. Weighs Ban Over Security Concerns
Netgear Community – Has My Router Been Hacked?
Asia Financial – TP-Link Ban Report
CBS News – TP-Link Router Ban Considered
Keeper Security – Signs of Hacked Router
BleepingComputer – U.S. Considers Ban on TP-Link

What Is a Tech Stack?

A Tech Stack is the collection of technologies that powers modern websites, applications, and digital businesses. Whether developing software, launching SaaS, or deploying cloud solutions, understanding tech stacks is important for building technology platforms. In this blog post, learn about tech stack layers, vendor examples, and why choosing the right stack matters for businesses.

Friday, September 12, 2025

Moral Ambivalence: Former CIA Agent Discusses Providing CSAM as a "Specialized Gift" to Foreign Targets

Andrew Bustamante states that he and his wife are former CIA agents. He has posted many videos and been interviewed widely on YouTube where he describes his exploits and explains some CIA operations.

In the following excerpt from a video (https://www.youtube.com/watch?v=LkOwKkivJ1E) he describes how the CIA would facilitate CSAM as a "specialized gift" to foreign targets who wanted it.

Tuesday, August 19, 2025

The Dark Side of Leadership and Accountability: The Ethics of Plausible Deniability

Plausible Deniability

Plausible deniability is the capacity of an individual—often a senior official or leader—to credibly deny knowledge of or responsibility for illicit or unethical actions carried out by subordinates or associates, due to a lack of direct evidence linking them to those actions. The denial remains "plausible" because the circumstances or absence of proof prevent conclusive attribution, even if the individual was involved or willfully ignorant. This practice often involves deliberately structuring relationships and communications to ensure deniability, enabling those in authority to escape blame or legal consequences if activities are exposed.

Examples

Creating plausible deniability in the context of information technology and digital forensics may involve technical mechanisms or strategies that enable a person to credibly deny knowledge of, or control over, certain data or actions. Examples include:

  • Deniable Encrypted File Systems: Software such as VeraCrypt enables users to create hidden encrypted volumes within other encrypted containers. If compelled to reveal a password, a user can provide access to the “outer” volume while denying the existence of the “hidden” one. The existence of the hidden volume typically cannot be proven through standard forensic methods if configured correctly.
  • Hidden Operating Systems: VeraCrypt also supports the creation of a hidden OS within an encrypted partition. If a device is seized, the user can provide credentials for the decoy OS while maintaining plausible deniability about the hidden OS. Forensic detection becomes difficult if the hidden OS leaves no traces outside its partition.
  • Deniable Communication Protocols: Messaging solutions like Signal employ deniable authentication. Even if a transcript of communications is captured, it may be difficult for a third party to decrypt and prove who authored or participated in a conversation.
  • Anonymous Accounts: The creation and use of anonymous online accounts and pseudonymous email addresses allow users to plausibly deny authorship or control of content, as nothing is directly tied to their real identity if all technical precautions are maintained.
  • Obfuscation and Metadata Removal: Removing or falsifying metadata from documents, images, or other digital evidence can make attribution of authorship or origin difficult, supporting plausible deniability for content creators or transmitters.

These methods can be used to protect privacy and sensitive data, but they can also be abused to frustrate investigations and provide cover for illicit activity.

Challenging and Disproving Denials

Plausible deniability can be legally challenged or disproved in some situations, particularly when there is sufficient evidence to show that a person in authority did, in fact, have knowledge of or involvement in the questionable actions. Common scenarios in which plausible deniability fails or is overcome in court include:

  • Direct or Circumstantial Evidence: If investigators or prosecutors uncover direct evidence (such as emails, messages, recorded conversations, or documents) tying the individual to the actions, deniability collapses. Even strong circumstantial evidence can establish knowledge or intent, undermining plausible deniability.
  • Command Responsibility Doctrine: In military, law enforcement, or organizational contexts, leaders can be held legally responsible for the actions of subordinates if they knew or should have known about illegal acts and failed to prevent or punish them. Plausible deniability is not a defense if it can be shown that an official intentionally remained ignorant or deliberately failed to supervise.
  • Willful Blindness: Courts may challenge claims of plausible deniability if they find that a person “deliberately avoided” acquiring knowledge, a doctrine known as willful blindness. A person cannot escape liability simply by intentionally avoiding learning about potentially illegal activities.
  • Patterns of Conduct: Repeated patterns of behavior, communication, or organizational structure can indicate a deliberate attempt to insulate higher-ups from information while still enabling or authorizing misconduct.
  • Pleading Standards in Civil Cases: Under modern pleading standards (see Twombly, Iqbal), allegations must be plausible, not just possible. If a plaintiff presents enough factual content to allow an inference that the defendant was aware or involved, plausible deniability can be challenged at the motion to dismiss stage.
  • Legal Precedents: In cases such as Ashcroft v. Iqbal, the U.S. Supreme Court addressed whether defendants could be held liable if they were aware of subordinates’ actions, even if they denied direct involvement. The courts look for factual allegations that make liability plausible, not just possible.

Ashcroft v. Iqbal

In Ashcroft v. Iqbal (2009), the U.S. Supreme Court addressed what makes claims of government officials’ liability “plausible” rather than merely possible. Javaid Iqbal alleged that officials, including former Attorney General Ashcroft and FBI Director Mueller, discriminated against him after 9/11. The Court held that Iqbal’s complaint lacked specific factual content plausibly suggesting that Ashcroft and Mueller personally adopted discriminatory detention policies; his claims rested largely on general accusations that the officials “knew of, condoned, and willfully and maliciously agreed to subject” him to abuse “as a matter of policy.” Iqbal lost. To overcome such denials and survive a motion to dismiss, a complaint must contain enough factual content to support a reasonable inference of personal responsibility, not just a possible one. Plausible deniability is therefore protected unless the allegations are substantiated by facts allowing that inference.

Summary

In summary, plausible deniability is not absolute—legal systems have developed doctrines and standards (such as command responsibility, willful blindness, and specific pleading requirements) to pierce denials when sufficient evidence exists that a person knew of or participated in the conduct in question.

Associated Resources

  • Studies discussing paradoxical leadership behavior, which often touches on the dark side of leadership and accountability, appear in peer-reviewed leadership and organizational psychology publications. See: Lee, A., Lyubovnikova, J., Zheng, Y., & Li, Z. F. (2023). Paradoxical leadership: A meta-analytical review. Frontiers in Organizational Psychology, 1, 1229543. https://doi.org/10.3389/forgp.2023.1229543
  • The doctrine of “command responsibility” is an important subject in legal scholarship, especially in international law and military law reviews. See: Chantal Meloni, Command Responsibility: Mode of Liability for the Crimes of Subordinates or Separate Offence of the Superior?, Journal of International Criminal Justice, Volume 5, Issue 3, July 2007, Pages 619–637, https://doi.org/10.1093/jicj/mqm029
  • Foundational legal analysis for willful blindness and command responsibility can be found in discussions of U.S. v. Jewell, Ashcroft v. Iqbal, and Twombly v. Bell Atlantic Corp. See legal journals and Supreme Court case analyses for detailed precedent.

Monday, July 07, 2025

Ubiquitous Technical Surveillance & Countermeasures: Existential Threats & Mitigations

Ubiquitous Technical Surveillance (UTS) is the widespread collection and analysis of data from various sources—ranging from visual and electronic devices to financial and travel records—for the purpose of connecting individuals, events, and locations. 

This surveillance poses risks to government operations, business organizations, and individuals alike, threatening to compromise sensitive investigations, personal privacy, and organizational security. The surprising findings of a recent audit of FBI techniques to address UTS further heighten the need for awareness and response to the threats. 

As the sophistication and reach of surveillance technologies continue to grow, understanding the nature of UTS and implementing effective Technical Surveillance Countermeasures (TSCM) is essential for safeguarding sensitive information and ensuring operational integrity. This work explores UTS and TSCM and suggests mitigation strategies to combat the threats.

Overview

Ubiquitous Technical Surveillance (UTS) refers to the pervasive collection and analysis of data, including visual, electronic, financial, travel, and online records, for the purpose of connecting individuals, events, and locations. The significance of the threats is outlined in a recently declassified but heavily redacted DOJ/OIG audit of the FBI's response to UTS (DOJ, 2025). Based on the number of redactions, particularly in the CIA's section of the report, it is reasonable to infer that many incidents have occurred that have not been reported to the public.

Technical Surveillance Countermeasures (TSCM) refers to specialized procedures and techniques designed to detect, locate, and neutralize unauthorized surveillance devices and eavesdropping threats. TSCM is commonly known as a "bug sweep" or "electronic counter-surveillance" and is used to protect sensitive information from being intercepted by covert listening devices, hidden cameras, or other forms of technical surveillance (REI, 2025; Conflict International Ltd., 2025).

UTS Devices, Data Sources, & Risks

Technical surveillance data collection can occur through a variety of devices and data sources, including mobile phones, CCTV and smart devices, financial transactions, travel records, and online activity.

UTS is recognized as a significant and growing threat to governments, business organizations, and individuals, with the potential to compromise investigations, business operations, and personal safety. When collected surveillance information falls into the wrong hands and is used for nefarious purposes, harm can result.

UTS Threats

What are the UTS threats?

  • Significance: Described as an “existential threat” by the Central Intelligence Agency (CIA) due to its ability to compromise sensitive operations and personal safety (DOJ, 2025, p.4).

Risks:

  • Compromise of investigations, personnel PII, and sources (DOJ, 2025)
  • Exposure of operational details
  • Threats to personal and organizational security
  • Corporate espionage (Pinkerton, 2022)

Real-World UTS Scenarios

The following incidents are a sample of situations involving UTS.

  • Cartel Tracking via Phones and Cameras: Criminals exploited mobile phone data and city surveillance cameras to track and intimidate law enforcement and informants (DOJ, 2025, p.18).
  • Organized Crime and Phone Records: Crime groups used call logs and online searches to identify informants (DOJ, 2025, p.18).
  • Financial Metadata De-Anonymization: Commercial entities re-identified individuals from anonymized transaction data. In 2015, researchers from the Massachusetts Institute of Technology found that data from just four transactions was enough to positively identify the cardholder 90% of the time (DOJ, 2025, p.17).
  • Travel Data Correlation: Adversaries used travel records to reveal covert meetings and operational activities (DOJ, 2025, p.1).
  • Online Activity Analysis: Aggregated web and social media data to build detailed personal profiles (DOJ, 2025, p.1).
  • Visual Surveillance: Use of CCTV and smart devices for real-time tracking and event reconstruction.
  • Electronic Device Tracking: Exploitation of device signals and unique identifiers for location tracking.
  • Combined Data Exploitation: Overlaying multiple data sources to establish “patterns of life.”
  • Commercial Data Brokers: Purchase of large datasets for profiling and targeting.
  • Compromised Communications: Poorly secured communications exposing sensitive activities.
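The financial-metadata scenario above is easy to demonstrate. In this toy sketch, an "anonymized" ledger contains no names, yet an adversary who observes just two (merchant, day) points can already single out one user, echoing the MIT four-transaction result. The data and the matching_users helper are invented for illustration:

```python
from collections import defaultdict

# Toy "anonymized" ledger of (user_id, merchant, day) records: no names at all.
ledger = [
    ("u1", "cafe", 1), ("u1", "gym", 2), ("u1", "bookshop", 4), ("u1", "bakery", 6),
    ("u2", "cafe", 1), ("u2", "gym", 2), ("u2", "cinema", 5),   ("u2", "bakery", 7),
    ("u3", "cafe", 3), ("u3", "gym", 2), ("u3", "bookshop", 4), ("u3", "bakery", 6),
]

def matching_users(known_points):
    """Users whose history contains every (merchant, day) point the adversary knows."""
    history = defaultdict(set)
    for user, merchant, day in ledger:
        history[user].add((merchant, day))
    return [user for user, seen in history.items() if set(known_points) <= seen]

# Two observed transactions narrow three candidates down to exactly one.
print(matching_users([("cafe", 1), ("bookshop", 4)]))  # prints ['u1']
```

Scale the same join up to millions of real transaction records and a few externally observed purchases, and "anonymized" data stops being anonymous.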

UTS Response: Organizational Challenges - FBI

The FBI identified UTS as an issue impacting the Bureau. However, a recently declassified audit by the Office of the Inspector General (OIG) identified several challenges and areas for improvement in the FBI's response (DOJ, 2025, p.4).

OIG Audit of the FBI's Efforts (DOJ, 2025)

  • Red Team Analysis: Initial FBI efforts were high-level and did not fully address known vulnerabilities.
  • FBI Strategic Planning: Ongoing development, but lacking clear authority and coordination.
  • Training Gaps: Basic UTS training is mandatory for FBI personnel, but advanced training is limited and optional.
  • Incident Response: FBI Data breaches revealed policy gaps and lack of coordinated response.
  • Recommendations: The FBI needs comprehensive vulnerability documentation, strategic planning, clear authority, and expanded training.

Countermeasures & Best Practices

Combating the threats from UTS is a daunting challenge. Several steps can be taken to mitigate the threats.

Scenario-Specific Steps

Suggested General Countermeasures

  • Regular training on digital hygiene and counter-surveillance
  • Encryption of sensitive data and communications
  • Physical security for sensitive locations and devices
  • Vigilance and behavioral adaptation to signs of surveillance
  • Technical Surveillance Countermeasures (REI, 2025; Conflict International Ltd., 2025; EyeSpySupply, 2023)

Training & Awareness (DOJ, 2025)

  • Basic UTS Awareness: Should be mandatory for all FBI personnel.
  • Advanced UTS Training: Recommended for high-risk FBI roles; should be expanded and resourced.
  • Continuous Learning: Stay updated on emerging threats and countermeasures.

Incident Response Recommendations from the OIG Audit of the FBI (DOJ, 2025)

  • FBI should establish clear lines of authority for UTS incidents.
  • FBI should develop and rehearse coordinated response plans.
  • FBI should regularly review and update internal controls and policies.

Summary

The growing sophistication and reach of surveillance technologies have made UTS a threat to government operations, business organizations, and individuals. Real-world incidents demonstrate how adversaries exploit mobile phone data, surveillance cameras, financial transactions, and travel records to compromise investigations, expose operational details, and threaten personal and organizational security.

The FBI, recognizing UTS as an existential threat, has faced challenges such as insufficient planning, limited training, and gaps in incident response.

Technical Surveillance Countermeasures (TSCM), including procedures like bug sweeps and electronic counter-surveillance, are tools for detecting and mitigating unauthorized surveillance devices. Best practices for mitigation include regular training, encryption, physical security, and continuous awareness of emerging threats.

Conclusion

The risks posed by UTS are immediate and evolving, with the potential to undermine investigations, compromise privacy, and threaten organizational integrity. Effective countermeasures require a combination of technical solutions, organizational policies, and training. The findings of the OIG audit of the FBI highlight the need for clear authority, coordinated response plans, and regular updates to internal controls. As surveillance technologies continue to advance, adopting a proactive and comprehensive approach to counter-surveillance is important for safeguarding information and maintaining operational security.

References

Conflict International Ltd. (2025, June). Bug Sweeps (TSCM): Protecting Against AirTag Stalking and Modern Surveillance. https://conflictinternational.com/news/bug-sweeps-tscm-protecting-against-airtag-stalking-and-modern-surveillance

DOJ. (2025, June). Audit of the Federal Bureau of Investigation's Efforts to Mitigate the Effects of Ubiquitous Technical Surveillance. Department of Justice, Office of the Inspector General. https://oig.justice.gov/sites/default/files/reports/25-065.pdf

EyeSpySupply. (2023, December). The Importance of TSCM Equipment for Security. Blog. https://blog.eyespysupply.com/2023/12/29/the-importance-of-tscm-equipment-for-security/

Pinkerton. (2022, July). Technical Surveillance Countermeasures to Prevent Corporate Espionage. https://pinkerton.com/our-insights/blog/technical-surveillance-countermeasures-to-prevent-corporate-espionage

REI (Research Electronics Institute). (2025). TSCM Equipment and Training. https://reiusa.net/

Friday, June 27, 2025

Disturbing Revelations - Annual Assessment of the IRS’s Information Technology Program

The Treasury Inspector General for Tax Administration (TIGTA) released its annual assessment of the IRS’s Information Technology (IT) Program for 2024. This review, based on audit reports from TIGTA and the Government Accountability Office (GAO), paints a mixed picture: while progress has been made in some areas, significant vulnerabilities and management failures persist. These issues threaten the security of taxpayer data, the effectiveness of IRS operations, and public trust in the agency.

Summary of Findings

The IRS is a massive and complex organization, collecting $5.1 trillion in federal tax payments and processing 267 million tax returns and forms in FY 2024. Its reliance on computerized systems is absolute, making IT security and modernization paramount. Despite efforts to modernize and secure its systems, the IRS faces mounting challenges due to funding cuts, workforce reductions, and persistent weaknesses in cybersecurity, access controls, and IT asset management.

Audits revealed that while the IRS is making strides in areas like identity proofing for its Direct File pilot and blocking suspicious email websites, it falls short in critical cybersecurity functions, proper management of user access, timely vulnerability remediation, and oversight of cloud services. Insider threats, incomplete audit trails, and inadequate separation of duties further exacerbate the risks.

Some Disturbing Revelations

  • The IRS’s cybersecurity program was rated “not fully effective,” failing in three of five core cybersecurity functions (Identify, Protect, Detect), including shortcomings in system inventories, vulnerability remediation, encryption, and multifactor authentication.
  • 279 former IRS users retained access to sensitive systems for up to 502 days after separation, exposing taxpayer data to unauthorized access and potential misuse.
  • The IRS failed to remediate tens of thousands of critical and high-risk vulnerabilities in a timely manner, including 2,048 critical and 13,558 high-risk vulnerabilities in a single security application environment.
  • Personally Identifiable Information (PII) for over 613,000 IRS user authentications was sent to unauthorized locations outside the U.S. due to a vendor’s flaw in the Login.gov system, placing sensitive data at risk.
  • The IRS was unable to locate all cloud services contracts or determine their value for nearly half of its cloud applications, undermining financial oversight and increasing the risk of waste or duplication.
  • 35% of IRS systems required to send audit trails for detecting unauthorized access to PII and Federal Tax Information failed to do so, severely limiting the ability to investigate or detect data breaches.
  • The IRS did not fully comply with federal mandates to block TikTok on government devices, leaving more than 2,800 mobile devices and 900 computers potentially exposed to foreign surveillance risks.
  • Inadequate separation of duties was found in 70% of reviewed cloud systems, with the same individuals controlling multiple key roles, heightening the risk of fraud or error going undetected.
  • The IRS’s data loss prevention controls could be circumvented, allowing users to intentionally exfiltrate sensitive taxpayer data despite existing monitoring tools.
  • Despite identifying 334 legacy systems needing updates or retirement, only 2 had specific decommissioning plans, leaving the IRS reliant on outdated, potentially insecure systems.
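Several of these findings, such as the 279 former users who retained access for up to 502 days, come down to reconciling HR separation records against still-active accounts. A minimal sketch of that kind of check (the record formats and names are hypothetical; a real implementation would query HR and identity-management systems):

```python
from datetime import date

# Hypothetical records for illustration only; in practice these would
# come from HR and identity-management systems.
separations = {"user1": date(2024, 1, 15), "user2": date(2024, 11, 2)}
active_accounts = {"user1", "user3"}

def flag_stale_access(separations, active_accounts, as_of):
    """Flag accounts still active after the user's separation date."""
    stale = []
    for user, sep_date in separations.items():
        if user in active_accounts and as_of > sep_date:
            stale.append((user, (as_of - sep_date).days))
    return stale

# user1 separated on 2024-01-15 but still holds an active account.
print(flag_stale_access(separations, active_accounts, date(2025, 6, 1)))
# → [('user1', 503)]
```

Run on a regular schedule, a reconciliation like this would surface stale access in days rather than the year-plus intervals the audit found.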

The findings underscore the need for the IRS to address IT security and management deficiencies. Without corrective action, the agency remains vulnerable to internal and external threats, risking taxpayer privacy, financial integrity, and the effective administration of the nation’s tax system.

Read the full report at this link: https://www.tigta.gov/sites/default/files/reports/2025-06/20252S0007fr.pdf

Friday, June 13, 2025

Recruiters Targeted by Fake Job Seekers in Malware Scam

Recruiters are facing a cyber threat as financially motivated hackers, notably the FIN6 group (also known as Skeleton Spider), shift tactics to social engineering campaigns. The attackers are posing as job seekers on popular platforms like LinkedIn and Indeed, luring unsuspecting recruiters into downloading malware via fake portfolio websites.

How the Scam Works

The scam starts when cybercriminals, pretending to be legitimate job applicants, reach out to recruiters through job-hunting platforms. After initial contact, they send a follow-up phishing email that directs the recruiter to a convincing online portfolio site. These sites, often hosted on Amazon Web Services (AWS), mimic authentic job seeker pages, sometimes using plausible names associated with the applicant.

To evade automated security systems, the phishing emails do not contain clickable hyperlinks. Instead, recruiters are prompted to manually type the provided web address into their browser, which helps the attackers bypass link-detection tools [1].

The Malware: More_eggs

Once on the fake portfolio site, the recruiter is asked to complete a CAPTCHA and other checks to prove they are human, further evading automated scanners. If they proceed, they are offered a ZIP file to download—purportedly a resume or work sample. Inside the ZIP is a Windows shortcut (.LNK) file that, when opened, executes a hidden JavaScript payload using wscript.exe. This payload connects to the attackers' command-and-control server and installs the More_eggs backdoor.
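On the defensive side, incoming archives can be screened for the shortcut trick described above before anyone opens them. A minimal illustrative check in Python (the extension list is an assumption for demonstration, not an exhaustive blocklist):

```python
import zipfile

# Extensions commonly abused in campaigns like this one (illustrative
# list, not exhaustive): Windows shortcuts and script files.
SUSPICIOUS_EXTENSIONS = (".lnk", ".js", ".vbs", ".hta")

def suspicious_entries(zip_source):
    """Return archive members whose names end in a risky extension.

    Accepts a path or a file-like object, as zipfile.ZipFile does.
    """
    with zipfile.ZipFile(zip_source) as zf:
        return [name for name in zf.namelist()
                if name.lower().endswith(SUSPICIOUS_EXTENSIONS)]
```

A "resume" ZIP containing `resume.lnk` would be flagged by this check, while one holding only documents and images would pass.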

More_eggs is a modular, JavaScript-based malware-as-a-service tool that allows attackers to:

  • Remotely execute commands
  • Steal credentials
  • Deliver additional malicious payloads

Notably, More_eggs operates in memory on the user's device, making it harder for traditional antivirus solutions to detect.

Evasion Tactics

FIN6 leverages several techniques to avoid detection and takedown:

  • Anonymous Domain Registration: Domains are registered through GoDaddy with privacy services, obscuring the true identity of the registrants [1].
  • Cloud Hosting: Hosting malicious sites on AWS infrastructure provides legitimacy and resilience against quick takedowns [1].
  • Human Verification: CAPTCHAs and environmental checks ensure only real users (not automated scanners) reach the malware download stage [1].

Industry Response

AWS responded to the incident by reaffirming its commitment to enforcing its terms of service and collaborating with the security research community. The company encourages reporting of any suspected abuse through its dedicated channels for swift action.

Takeaways for Recruiters and Organizations

This campaign highlights the evolving landscape of cyber threats, where even those in hiring roles are now prime targets. Key steps for recruiters and organizations to protect themselves include:

  • Treat unsolicited portfolio links with suspicion, especially if they require manual entry into a browser.
  • Avoid downloading ZIP files or clicking on shortcut files from unknown or untrusted sources.
  • Ensure endpoint security solutions are updated and capable of detecting in-memory malware.
  • Report suspicious activity to IT or security teams immediately.

Recruiters and organizations should be aware of these attacks and exercise caution when engaging with unsolicited job applicants.

References




Thursday, June 12, 2025

Disturbing Spying Revelations: Meta/Facebook/Instagram & Yandex

Overview:

The web page https://localmess.github.io/ discloses a previously undocumented and highly invasive tracking technique used by Meta (Facebook/Instagram) and Yandex that affected billions of Android users. Researchers [4] discovered that this method covertly linked users' mobile web browsing sessions to their identities in native apps, bypassing standard privacy protections. 

The practice was active until early June 2025, when both Meta and Yandex, after being caught with their hands in the proverbial PII cookie-jar, ceased these behaviors following public disclosure [1][2][3].

Key Findings

1. Covert Web-to-App Tracking via Localhost on Android

  • Meta and Yandex embedded scripts (Meta Pixel and Yandex Metrica) on millions of websites.
  • When a user visited such a site in a mobile browser on Android, the script would communicate directly with native apps (like Facebook, Instagram, or Yandex Maps) installed on the same device.
  • This communication happened via localhost sockets—special network ports on the device that allow apps to talk to each other without user knowledge or consent [1][3].
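The localhost channel described above is ordinary socket programming, which is why no special permission or user prompt was involved. A minimal sketch of the pattern (the payloads and identifiers are illustrative stand-ins, not Meta's or Yandex's actual code, and an ephemeral port is used so the sketch always runs):

```python
import socket
import threading

# A "native app" listens on a local port; an in-page "script" connects
# to it. Yandex apps reportedly listened on fixed ports such as 29009;
# here the OS picks a free port so the demo is self-contained.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def fake_app_listener():
    """Stand-in for a native app: receive web data, reply with an ID."""
    conn, _ = srv.accept()
    conn.recv(1024)                   # e.g. a tracking cookie from the page
    conn.sendall(b"device-id-12345")  # app-side persistent identifier
    conn.close()

t = threading.Thread(target=fake_app_listener)
t.start()

# Stand-in for the embedded script bridging web and app identities.
cli = socket.create_connection(("127.0.0.1", port))
cli.sendall(b"_fbp=fb.1.1234567890.987654321")  # illustrative cookie value
received = cli.recv(1024)   # the page-side script now holds a device ID
cli.close()
t.join()
srv.close()
print(received)
```

Nothing in this exchange touches the network outside the device, so cookie clearing, incognito mode, and network-level tracking protections never see it.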

2. How the Tracking Worked

  • Meta Pixel:
      o The Meta Pixel JavaScript sent the browser’s _fbp cookie (used for advertising and analytics) to Meta apps via WebRTC (using STUN/TURN protocols) on specific UDP ports (12580–12585).
      o Native Facebook and Instagram apps listened on these ports in the background, received the _fbp value, and linked it to the user’s app identity, effectively de-anonymizing web visits [1][3].
      o This bypassed protections like cookie clearing, incognito mode, and Android permission controls.
  • Yandex Metrica:
      o Yandex’s script sent HTTP/HTTPS requests with tracking data to localhost ports (29009, 29010, 30102, 30103), where Yandex apps listened.
      o The apps responded with device identifiers (e.g., the Android Advertising ID), which the script then sent to Yandex servers, bridging web and app identities [1].
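Because the ports involved were fixed, they give analysts something concrete to look for. A minimal illustrative probe (written for desktop Python, not as an Android tool; a listener on one of these ports is a red flag worth investigating, not proof of tracking):

```python
import socket

# TCP ports named in the research as used by Yandex apps.
REPORTED_TCP_PORTS = [29009, 29010, 30102, 30103]

def listening_ports(ports, host="127.0.0.1", timeout=0.2):
    """Return the subset of ports that accept a TCP connection."""
    open_ports = []
    for port in ports:
        try:
            # A successful connect means something is listening locally.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            pass  # connection refused or timed out: port is closed
    return open_ports

# On most machines every probe fails and this prints an empty list.
print(listening_ports(REPORTED_TCP_PORTS))
```

Note that the Meta Pixel side used UDP with WebRTC, which a simple TCP connect cannot detect; this sketch only covers the HTTP/HTTPS localhost channel.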

3. Privacy and Security Implications

  • This method allowed companies to:
      o Circumvent privacy mechanisms such as incognito mode, cookie deletion, and even Android’s app sandboxing.
      o Link browsing habits and cookies with persistent app/user identifiers, creating a cross-context profile of the user.
      o Potentially expose browsing history to any third-party app that listened on those ports, raising the risk of malicious exploitation [1][3].

4. Prevalence

  • Meta Pixel was found on over 5.8 million websites; Yandex Metrica on nearly 3 million.
  • In crawling studies, thousands of top-ranked sites were observed attempting localhost communications, often before users had given consent to tracking cookies [1].

5. Timeline and Disclosure

  • Yandex has used this technique since 2017; Meta adopted similar methods in late 2024.
  • Following responsible disclosure to browser vendors and public reporting in June 2025, both companies stopped the practice. Major browsers (Chrome, Firefox, DuckDuckGo, Brave) have since implemented or are developing mitigations to block such localhost abuse [1][3].

Technical Details

| Aspect | Meta/Facebook Pixel | Yandex Metrica |
|---|---|---|
| Communication Method | WebRTC STUN/TURN to UDP ports (12580–12585) | HTTP/HTTPS requests to TCP ports (29009, 29010, 30102, 30103) |
| Data Shared | _fbp cookie, browser metadata, page URLs | Device IDs (AAID), browser metadata |
| Apps Involved | Facebook, Instagram | Yandex Maps, Browser, Navigator, etc. |
| User Awareness | None; bypassed consent and privacy controls | None; bypassed consent and privacy controls |
| Platform Affected | Android only (no evidence for iOS or desktop) | Android only (no evidence for iOS or desktop) |
| Risk of Abuse | High: enables de-anonymization, history leakage | High: enables de-anonymization, history leakage |

Broader Implications

  • Bypassing Privacy Controls: This method undermined the effectiveness of cookie controls, incognito/private browsing, and Android’s app isolation, showing that even sophisticated privacy tools can be circumvented by creative inter-app communications [1][3].
  • Need for Platform-Level Fixes: Browser and OS vendors are now patching this specific exploit, but the underlying issue—unrestricted localhost socket access—remains a systemic risk on Android. The researchers call for stricter platform policies and user-facing controls for localhost access [1].
  • User and Developer Awareness: Most website owners were unaware their sites enabled this tracking. End-users had no indication or control over the process. The lack of transparency and documentation from Meta and Yandex is highlighted as a major concern [1].

Conclusion

The research revealed a disturbing tracking vector that allowed Meta and Yandex to link users’ web and app identities on Android at a massive scale, defeating standard privacy safeguards. The disclosure led to rapid mitigation, but the incident underscores the need for deeper systemic changes in how browsers and mobile platforms handle inter-app communications and tracking [1][2][3]. As the researchers put it: “This tracking method defeats Android's inter-process isolation and tracking protections based on partitioning, sandboxing, or clearing client-side state.” [1]

References

1. https://localmess.github.io
2. https://www.grc.com/sn/sn-1029-notes.pdf
3. https://gigazine.net/gsc_news/en/20250604-meta-yandex-tracking/
4. Researchers and authors of the localmess GitHub page: Aniketh Girish (PhD student), Gunes Acar (Assistant Professor), Narseo Vallina-Rodriguez (Associate Professor), Nipuna Weerasekara (PhD student), Tim Vlummens (PhD student).

Note: Perplexity.AI was used to assist in preparing this report.