Sunday, September 21, 2025

Leadership Case Study: Hiring and selection processes are imperfect - Predator Anthony Bunten, a.k.a. Sean Englbrecht

By Dr. Frank Kardasz  |  Editor Ava Gozo

Another Day in the ICAC Office...

Several years ago I was assigned as Task Force Commander for the Arizona Internet Crimes Against Children (ICAC) Task Force while employed as a Sergeant with the Phoenix Police Department Organized Crime Bureau.

We had openings in our unit for undercover investigators. Duties included reactive and proactive undercover investigations involving CSAM, as well as covertly posing in various personas to catch the online predators who seek sex with minors in Internet chat rooms.

I advertised the openings in the weekly Phoenix PD newsletter that was subsequently distributed to our 2,500 sworn employees, and I received about fifteen applications.

I culled through the applications and chose the top five to invite into an interview process based on their seniority, experience, and disciplinary history, because that was the department- and union-approved method of initial selection and de-selection.

Among the applicants not chosen was a patrol officer named Anthony Bunten.  Phoenix PD is a big agency, in the fifth largest city in the US, and I had no other knowledge of Bunten. According to his file he was a four-year patrol officer with some excessive force discipline history and no investigative experience. His application was rejected as not-eligible.

Through our approved interview process we later selected a couple of qualified candidates for the job.

A Few Months Later

Fast forward to September 2002...

The Phoenix evening news' lead story reported that the Glendale Police Department had arrested Phoenix Police Officer Anthony Bunten.

Glendale Police reported that Bunten, age 34, identifying himself as "Tony," contacted a 13-year-old boy in an Internet chat room, then arranged a meeting at the boy's home in Glendale, AZ. The boy's parents were away, but his 10-year-old brother walked in while the boy was having sex with Bunten.

The ten-year-old later reported the incident to his parents. The parents reported to Glendale PD. Glendale investigators identified Bunten because he had given his phone number to his victim and had used the Internet login name "Buntenaw".

Just after his arrest, Anthony Bunten strategically changed his name to Sean Englbrecht. That change made it more difficult for subsequent public records searches to find him, and consequently his later sentencing was not reported in the media.

Bunten was indicted on two counts of sexual conduct with a minor and one count of sexual exploitation of a minor, charges that typically carry a maximum sentence of over 50 years in prison.

Predator's Background

In his four-year career with Phoenix police, Bunten was disciplined three times, including an excessive-force incident where he pushed a drunken trespasser with his baton and challenged him to a fight. He received the maximum punishment short of termination, a 240-hour suspension. "The squad as a whole knows that Officer Bunten is 'high-strung,' " a fellow officer noted in a report. The other two discipline reports involved late paperwork. Nothing in Bunten's personnel file mentioned anything related to sexual misconduct. And Bunten passed a psychological examination and polygraph specifically addressing criminal sexual activity when he was hired.

Plea Deal

Following plea negotiations, the charge against Bunten/Englbrecht was reduced to one count of attempted sexual conduct with a minor. Plea deals like this, which let offenders off with lighter sentences, are often negotiated in part to spare the victim from having to testify at trial and to push cases through the unending backlog awaiting adjudication in a very busy court system.

His certification as an Arizona peace officer was revoked.

Predator Imprisoned

In 2003, Bunten-turned-Englbrecht was sentenced to 12 years in prison.

His Arizona Department of Corrections web page shows that while incarcerated his work assignments included Carpentry, Conflict Resolution, Substance Abuse Brief Intervention, and Cultural Diversity.
He has a tattoo of a machine gun on his right bicep with the words "Dominate, Eliminate, Control."

Today

Anthony Bunten a.k.a. Sean Englbrecht was released in 2013 and is presently registered as a Level 1 sex offender residing in Phoenix.

Postscript

The arrest of a law enforcement officer anywhere is a stain on all of us everywhere.  In the Phoenix PD, the Bunten story was disturbing news. I am grateful to the Glendale PD for capturing this predator and also grateful to the brave family who reported him. 

In initially hiring rookie Bunten, Phoenix PD appears to have performed the due diligence that their processes required. In retrospect, they probably regretted not terminating him after his use of force violation.  

Personally, I felt our AZ ICAC unit was fortunate to have de-selected the four-year veteran Bunten early in our process. Had we brought a predator into a unit whose job it is to catch predators, this bad situation would have been even worse.

___________________________________________
Please buy a coffee at the link below for our excellent editor Ava Gozo 


___________________________________________

Disclaimer:

This information is intended for research and educational purposes and does not constitute political advocacy, legal advice, financial advice, or promotion of any illegal, harmful, or unsafe activities. This content is not designed to violate Google policies, including—but not limited to the following:

  • No Promotion of Violence or Dangerous Acts: This post does not encourage, promote, or glorify violence, criminal activity, or harmful acts.
  • No Hateful, Derogatory, or Adult Content: Content herein does not contain or endorse hate speech, harassment, discrimination, sexually explicit material, or offensive language.
  • No Circumvention or Unauthorized Techniques: All mentions of policies, techniques or procedures are for educational awareness and are not intended to enable or facilitate unauthorized activity.
  • No Policy Violations Related to Privacy or Data Collection: This blog complies with Google AdSense requirements regarding user privacy and does not misuse personal information.
  • No Political Advocacy: This blog does not advocate for, endorse, or oppose any particular political positions, candidates, or parties, and aims to remain neutral on political matters.
  • No Sales Links: Links to other sites are not product promotions.

This site strives for compliance with Google Policies, content standards, and legal requirements.

Wednesday, September 17, 2025

Cybersecurity: Your PII At-Risk


Hostile Acts in the Data Spheres: The Battles for Your PII

By Dr. Frank Kardasz, MPA, Ed.D.
Editor: Ava Gozo
December 24, 2021 (revised September 17, 2025)


The relentless barrage of cybercrimes—data breaches, doxing, deepfakes, identity thefts, intrusions, and malware—constitutes a continual assault on efforts to preserve personal information, freedom, and finances. Leak-prone storage, widespread surveillance (both lawful and unlawful), and ineffective regulations compound these risks. As the populace succumbs to relentless data collection, the monetization, politicization, and weaponization of information constitute an alarming and wicked menace. This article discusses some core issues and concludes with resources for defense and prevention.


Some Terms of (the Dark) Art

For newcomers to the world of data compromise and cyber misdeeds, here are definitions for four key terms: 

💥 Personally Identifiable Information (PII) 💥

Personally Identifiable Information (PII) is data that, alone or in combination with other information, can identify an individual (Investopedia, 2021). Examples include date and place of birth, social security number, addresses, account information, maiden names, pet names, schools attended, and graduation dates. PII is often at risk through accidental or intentional leaks. Many people do not realize how vulnerable their PII really is (PYMNTS, 2018).


💥 Phishing 💥

Phishing is an exploit where perpetrators impersonate reputable businesses or people to acquire sensitive information, such as credit card numbers and passwords (Techopedia, 2021). Tactics are often disguised as friendly social media questions, gradually harvesting personal details.

💥 Doxing 💥

Doxing involves retrieving, hacking, and publishing private information such as names, addresses, and phone numbers. Motivations vary; coercion is a common one (Techopedia, 2021). Attackers may threaten to publish doxed information unless a ransom is paid.

💥 Deepfakes 💥

Deepfakes, or synthetic content, use AI and advanced imaging to falsify video and audio, making people appear to say or do things they never did (Techopedia, 2021). Celebrity and political deepfakes are common, but any publicly available image could be targeted.


Some Users Place Themselves at Risk

In the pursuit of fame, fortune, or recognition, some individuals—both young and old—overexpose personal and familial details on social media while striving to become "influencers." This makes them vulnerable to tailored exploits by data harvesters. Cases of sextortion and other social harms are an increasing concern across the United States.

Depending on geopolitics, exposing personal data and wealth online can have serious repercussions. For example, in China, flaunting wealth online can lead to government censorship (Wang, 2021).


Unwitting Victims

Not all victims are careless. Often, PII is compromised simply by being in the wrong place at the wrong time and then exploited for malicious acts.

Sunday, September 14, 2025

Cybersecurity: 50 Tips for Personal & Business Protection

Cybersecurity is no longer optional—it’s essential. Cyber threats such as phishing, ransomware, and identity theft continue to rise, impacting both individuals and businesses. Implementing solid cybersecurity practices can strengthen your online safety and protect sensitive data.

This guide provides 50 cybersecurity tips to improve your security posture, covering personal safety, workplace security, data protection, and more.


General Cybersecurity Tips

  • Understand Cybersecurity Risks: Anyone can be a target of a cyberattack, not just large organizations.
  • Use Strong, Unique Passwords: Create complex passwords and avoid reusing them (see the generator sketch after this list). 
  • Enable Two-Factor Authentication (2FA): Adds an extra login barrier against credential theft. 
  • Keep Software Updated: Updates fix vulnerabilities and prevent malware infections. 
  • Back Up Data Regularly: Use encrypted cloud storage or external drives. 
  • Avoid Public Wi-Fi for Banking or Work: Use a VPN for secure browsing. 
  • Beware of Phishing Emails: Always double-check the sender before clicking links.
  • Secure Your Home Wi-Fi: Change default router credentials and use WPA3 encryption.
  • Use Antivirus/Anti-Malware Software: Select reputable security solutions.
  • Check Privacy Settings: Manage what information you share on social media.
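
To make the password tip concrete, here is a minimal Python sketch (standard library only) that generates a strong, random password. The 20-character length is an arbitrary choice for illustration, not a policy recommendation; a reputable password manager can generate and remember passwords like this for you.

    import secrets
    import string

    def generate_password(length: int = 20) -> str:
        """Build a random password from letters, digits, and punctuation."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    if __name__ == "__main__":
        # Each call yields an independent, unique password.
        print(generate_password())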
 

Device Security

  • Lock devices with strong PINs or biometrics.
  • Avoid public charging stations—carry your own cables and adapters.
  • Use a standard user account instead of an admin account for daily tasks.
  • Encrypt sensitive files to prevent unauthorized access (a minimal encryption sketch follows this list).
  • Regularly patch IoT devices and change default credentials.
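
As an illustration of the file-encryption tip above, the sketch below uses the third-party Python package cryptography (an assumption; any vetted encryption tool works) to encrypt and later decrypt a file. The file name notes.txt is purely illustrative, and in practice the key must be stored separately from the encrypted file.

    # pip install cryptography
    from cryptography.fernet import Fernet

    # Generate a key once and keep it somewhere safe (e.g., a password manager).
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Encrypt a sensitive file (illustrative file name).
    with open("notes.txt", "rb") as f:
        ciphertext = fernet.encrypt(f.read())
    with open("notes.txt.enc", "wb") as f:
        f.write(ciphertext)

    # Later, decrypt with the same key.
    with open("notes.txt.enc", "rb") as f:
        plaintext = fernet.decrypt(f.read())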


Email and Internet Use

  • Double-check sender information to avoid email spoofing (see the header-check sketch after this list).
  • Never click on unknown links. 
  • Use secure, up-to-date browsers. 
  • Clear cache and cookies frequently. 
  • Download apps only from trusted marketplaces.
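
One way to act on the spoofing tip above is to compare a message's visible From domain with its Return-Path domain. The Python sketch below (standard library only; the file name suspicious.eml is illustrative) performs that simple heuristic check. It is not a full SPF or DKIM validation, just a quick red-flag detector.

    from email import policy
    from email.parser import BytesParser
    from email.utils import parseaddr

    def domain_of(header_value) -> str:
        """Extract the domain portion of an address header."""
        address = parseaddr(str(header_value or ""))[1]
        return address.rsplit("@", 1)[-1].lower() if "@" in address else ""

    # Load a saved raw email (illustrative file name).
    with open("suspicious.eml", "rb") as f:
        msg = BytesParser(policy=policy.default).parse(f)

    from_domain = domain_of(msg["From"])
    return_domain = domain_of(msg["Return-Path"])

    if from_domain and return_domain and from_domain != return_domain:
        print(f"Possible spoofing: From={from_domain}, Return-Path={return_domain}")
    else:
        print("From and Return-Path domains match (not proof the message is safe).")

Mismatched domains also occur in legitimate bulk mail, so treat a mismatch as a prompt for closer inspection rather than proof of fraud.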
 

Workplace Cybersecurity

  • Train employees on phishing awareness and password hygiene. 
  • Use multi-factor authentication (MFA) for company logins.
  • Establish and update written security policies.
  • Perform regular penetration testing and security audits.
  • Rely on encrypted communication tools for business.
 

Data Protection

  • Enforce "minimum necessary" access to internal files.
  • Monitor data transfers to detect shadow IT usage. 
  • Apply data loss prevention (DLP) tools.
  • Encrypt and secure cloud-stored files.
  • Update written policies to reflect new threat landscapes.
 

Incident Response

  • Develop a written incident response plan.
  • Train with simulated breach scenarios.
  • Encourage instant reporting of suspicious behaviors.
  • Contain attacks quickly to minimize damage.
  • Keep clients, regulators, and partners informed in case of breaches.


Physical Cybersecurity

  • Secure physical access controls in workspaces.
  • Install CCTV and remote monitoring for critical areas.
  • Shred sensitive records before disposal.
  • Deploy badge-based entry systems.
  • Implement MDM (mobile device management) for company smartphones.
 

Advanced Cybersecurity Measures 

  • Shift toward a Zero Trust Architecture.
  • Deploy EDR (Endpoint Detection and Response) tools.
  • Use network segmentation to isolate sensitive systems.
  • Integrate threat intelligence feeds. 
  • Partner with peer organizations to share best practices.
 

Personal Cybersecurity Practices

  • Disconnect when devices are not in use. 
  • Use trusted password managers like 1Password or Bitwarden. 
  • Be skeptical of free services that seem too good to be true. 
  • Check your online banking and email account history regularly. 
  • Research tools and apps before installation.
 

Why Cybersecurity Best Practices Matter

Implementing even a few of these cybersecurity tips can drastically reduce exposure to digital threats. From password safety to incident response readiness, both individuals and organizations must take proactive steps to minimize risk.

For additional resources, also read:

  • How to Protect Against Phishing Attacks 
  • Securing IoT Devices at Home and Work 
  • Top Cybersecurity Tools for Small Businesses
 

Cybersecurity: SIM Card Swapping: How to Prevent Account Takeover Fraud


SIM card swapping fraud can happen to anyone. If criminals succeed, they can steal a phone number, take over accounts, and access bank or cryptocurrency funds. Most people know to be cautious about phishing emails, but SIM swapping scams often go unnoticed and can be devastating.

What Is SIM Card Swapping?

Your phone number is stored on a small SIM chip in your device. If a criminal convinces your provider to move your number to a SIM card they control, they gain access to your calls and texts. This matters because:

  • Many companies send login codes and password resets by text.
  • Scammers intercept those codes and can break into important accounts.
  • Victims face stolen money, identity theft, and recovery headaches.

How SIM Card Swapping Happens

  • Fraudsters first gather details about a victim, such as address, date of birth, and account numbers.
  • They call the mobile carrier, pretending to be the victim.
  • They claim the phone is lost or broken and request a new SIM card.
  • The carrier activates that SIM, transferring the victim’s number.
  • The victim’s phone stops working, and the scammer now controls the number.
  • SIM card swapping can occur with both physical SIM cards and eSIMs, as the underlying attack involves transferring a phone number or carrier profile rather than the physical card itself. 

Signs Your SIM Card Was Swapped

Watch for these red flags:

  • Phone suddenly has no service for calls or texts.
  • Family or friends say somebody else answered, or call attempts fail.
  • Alerts from your bank, email, or other accounts about password changes you did not request.

If any occur, contact your carrier immediately.


How to Protect Yourself

Defend against SIM swap fraud with these best practices:

  • Add a PIN or password to your mobile account: This makes impersonation harder for attackers.
  • Use app-based login codes (Google Authenticator, Authy) instead of SMS: These cannot be intercepted by SIM swapping (see the TOTP sketch after this list).
  • Keep personal info private: The less a criminal knows, the harder their attack.
  • Monitor your accounts: Enable alerts for suspicious logins or money movements.
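
For readers curious how app-based codes work under the hood, the sketch below uses the third-party Python library pyotp (an assumption; authenticator apps implement the same time-based one-time password standard, RFC 6238) to generate and verify a code. In real use the secret comes from the service's enrollment QR code rather than being generated locally.

    # pip install pyotp
    import pyotp

    # Demonstration secret; real services issue this via an enrollment QR code.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    code = totp.now()                      # six-digit code, rotates every 30 seconds
    print("Current code:", code)
    print("Verifies:", totp.verify(code))  # True within the validity window

The key point is that the code is derived on the device itself from a shared secret and the current time, so intercepting text messages gains the attacker nothing.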

Set Up Extra SIM Protection with Your Carrier

Major carriers provide additional security—see their pages to set up SIM locks, account PINs, and fraud alerts:

  • Google Fi: Enable SIM Number Lock
  • AT&T: Add a wireless passcode
  • Verizon: Set an account PIN
  • T-Mobile: Enable account security PIN/Passcode

Be sure to visit your carrier's support pages for step-by-step instructions.


Final Takeaway

SIM card swapping isn’t about hacking a phone—it’s social engineering aimed at telecom providers. A few minutes setting up SIM locking and carrier PINs can prevent account takeover and protect identity. Think of SIM protection as adding a deadbolt before trouble happens—most criminals move on if it looks too difficult.

Take time today to secure your SIM and implement strong account protection. It could save stress, money, and time. 

References

  1. https://www.kaspersky.com/resource-center/threats/sim-swapping
  2. https://www.trumarkonline.org/blog/sim-swapping-and-port-out-fraud/
  3. https://www.mcafee.com/blogs/mobile-security/what-is-sim-swapping/
  4. https://www.itgovernance.eu/blog/en/scammers-are-using-seo-to-strengthen-phishing-attacks
  5. https://www.verizon.com/about/account-security/sim-swapping
  6. https://www.thomsonreuters.com/en-us/posts/corporates/sim-swap-fraud/


Saturday, September 13, 2025

AI: Understanding False-Positive Hallucinations in AI Research: Implications for Academic Integrity

Generative artificial intelligence tools have revolutionized academic research, offering valuable support while introducing new challenges. Among the most pressing is the phenomenon of false-positive hallucinations. This article analyzes the nature, prevalence, and impact of hallucinations in AI-assisted academic work—and shares practical strategies for educators and students to address them.

What Are False-Positive Hallucinations in AI Research?

AI hallucinations occur when large language models confidently produce content that appears factual and authoritative, but is actually incorrect or fabricated. In academic contexts, false-positive hallucinations refer to AI-generated information that is presented as legitimate scholarly content, despite being entirely invented.

  • Hallucinations may be categorized by degree and type—such as acronym ambiguity, numeric errors, or fabricated references.
  • Unlike deliberate human misinformation, these errors result from underlying probabilistic processes in AI models.

The most alarming academic hallucinations involve fake citations and references. AI can generate plausible author names, credible article titles, and authentic-looking journal details that do not exist in reality.

Common Types of Academic Hallucinations

  • Reference Fabrication: AI creates non-existent sources and citations.
  • Fact Fabrication: AI invents false statistics or study outcomes.
  • Expert Fabrication: AI attributes quotes or opinions to fictional or unrelated authorities.
  • Methodological Fabrication: AI describes studies or experiments that never occurred.

How Prevalent Are AI Hallucinations in Academia?

False-positive hallucinations are a widespread issue across academic domains. Studies found that up to 69% of medical references generated by ChatGPT are fabricated, with many appearing professionally formatted. Leading legal AI tools also show hallucination rates between 17% and 33%, despite claims of being hallucination-free. Preliminary reviews reveal frequent generation of convincing—but entirely fictional—peer-reviewed sources.[2][3]

Notable Real-World Examples

Medical Research

ChatGPT has generated plausible journal article citations—complete with real researcher names—that simply do not exist. Such hallucinations pose a risk to medical decision-making if accepted as valid sources.

Legal Research

AI-powered legal research tools have created citations to fabricated court cases. These hallucinations often blend seamlessly with factual content, making them hard for experts and instructors to identify.

Academic Writing

AI has also invented fake conferences, institutions, and journal articles formatted with realistic details, misleading users and undermining academic credibility.

Should Students Be Required to Provide URLs for Sources?

Arguments in Favor

  • Direct URLs help verify the existence of sources.
  • Reduce risk of accepting hallucinated material.
  • Streamline instructors’ source checking.
  • Encourage lifelong habits of verification.

Arguments Against

  • Print and paywalled sources may not have URLs.
  • Could bias research toward online materials.
  • Increases the work required for students and instructors.
  • URL availability does not guarantee accuracy.

Balanced Solution

Require URLs, DOIs, or ISBNs for major claims where available—but teach broader verification and critical thinking alongside transparency about AI involvement.

Practical Strategies for Students

1. Verify Every Citation

  • Check references using library databases or search engines (a DOI-lookup sketch follows this list).
  • Cross-check key facts with multiple reliable sources.
  • Highlight statistical claims and ensure their credibility.
  • Use in-text citations linked to a comprehensive References section.
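
One quick, scriptable verification step is to look up a cited DOI against the public Crossref API, which returns a record only for DOIs that actually exist. The sketch below uses the third-party Python package requests; the DOI shown is a placeholder, not a real citation.

    # pip install requests
    import requests

    def doi_exists(doi: str) -> bool:
        """Return True if the Crossref API has a record for this DOI."""
        resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
        return resp.status_code == 200

    # Placeholder DOI for illustration; substitute the one cited in the AI output.
    print(doi_exists("10.1000/example-doi"))

A missing record does not always mean the citation is fake (older or paywalled works may lack DOIs), so treat the check as one signal among several.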

2. Use AI as a Supplement

  • Leverage AI for vocabulary and brainstorming, not complete research generation.
  • Critically review and refine AI suggestions.

3. Develop Critical Evaluation Skills

  • Question unlikely or overly perfect findings.
  • Probe for unsourced assumptions.
  • Ensure internal consistency across arguments and data.

4. Transparently Declare AI Use

  • State which parts of the work were assisted by AI.
  • Describe how references and facts were verified.

5. Combine Multiple Tools and Approaches

  • Compare outputs between different AI tools.
  • Use specialized hallucination detectors when available.
  • Seek human feedback from peers or instructors.

Conclusion: Balancing Integrity and Innovation

AI hallucinations present a significant challenge to academic integrity, threatening the reliability of research across fields. Rather than prohibiting AI, institutions should cultivate policies emphasizing transparency, verification, and critical skill-building. By combining the strengths of AI with rigorous human oversight, academia can continue to innovate—without sacrificing honesty and credibility. 

Cybersecurity: Securing Your Network: The TP-Link Controversy & Router Safety Tips

TP-Link, a major Chinese router manufacturer, is under investigation by U.S. authorities over national security concerns. A possible ban on its products in the U.S. is being considered, raising questions about cybersecurity, market dominance, and router safety for both home and business users.

Key Points of the TP-Link Investigation

  • Market Dominance: TP-Link controls about 65% of the U.S. market for home and small business routers.

  • Government Usage: TP-Link routers are deployed across federal agencies, including the Department of Defense and NASA.

  • Cybersecurity Concerns: Reports suggest Chinese hackers have compromised thousands of TP-Link routers to launch attacks on Western organizations.

  • Pricing Strategy: The DOJ is examining whether TP-Link’s below-market pricing strategy violates antitrust laws.

Potential Implications of a TP-Link Ban

If a ban on TP-Link devices is implemented, the U.S. router market could face major disruptions. A policy shift may happen as early as next year under the new administration. Such a move would leave millions of U.S. households and businesses searching for alternative router solutions.

Security Risks and Vulnerabilities in TP-Link Routers

  • Hacking Reports: Microsoft confirmed that a Chinese hacking group used compromised TP-Link routers in attacks on North American and European organizations.

  • CISA Alerts: The U.S. Cybersecurity and Infrastructure Security Agency identified vulnerabilities in TP-Link devices that could allow remote code execution.

  • Persistent Flaws: Researchers note that TP-Link routers often ship with unpatched security flaws, drawing criticism over poor vendor response.

How to Check if Your Router is Compromised

With router-based cyberattacks becoming more common, it’s important to detect signs of compromise early. Look out for:

  • Unexplained slow internet speeds

  • Difficulty logging into your router’s admin settings

  • Browser redirects to strange websites

  • Suspicious network activity during unusual hours

  • Unknown devices connected to your network

  • Unfamiliar software appearing on connected devices

Steps to Check Your Router

  1. Log into your router’s admin panel and review logs for suspicious activity.

  2. Check the device list for unknown or unauthorized entries.

  3. Verify DNS settings to ensure they haven’t been changed (a minimal check sketch follows these steps).
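
Step 3 can be partly scripted from a computer on the network. The Python sketch below (standard library only) assumes a Linux or macOS host where the active resolvers appear in /etc/resolv.conf, and the expected addresses are placeholders you would replace with your router's or chosen provider's DNS servers.

    # Compare the resolvers the system is using against an expected list.
    EXPECTED_DNS = {"192.168.1.1", "1.1.1.1"}   # placeholder values; use your own

    def current_dns_servers(path: str = "/etc/resolv.conf") -> set:
        servers = set()
        with open(path) as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.add(parts[1])
        return servers

    unexpected = current_dns_servers() - EXPECTED_DNS
    if unexpected:
        print("Unexpected DNS servers:", ", ".join(sorted(unexpected)))
    else:
        print("DNS servers match the expected list.")

Also compare the result against the DNS servers shown in the router's own admin panel; a mismatch on either side is worth investigating.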

If you suspect a compromise:

  • Change the administrator password immediately.

  • Update your router’s firmware to the latest version.

  • Consider performing a factory reset for a clean start.

Final Takeaway: Protecting Your Network

The TP-Link controversy adds to growing concerns about router security and foreign-manufactured hardware. Regardless of brand, users should take proactive cybersecurity measures, keep firmware updated, and regularly monitor their networks for suspicious behavior.

Staying informed ensures that both home networks and businesses remain protected against evolving cyber threats.


References

Slashdot – U.S. Weighs Banning TP-Link Routers
Reuters – U.S. Considers Ban on TP-Link
NordVPN – Router Malware
Business Insider – TP-Link Pricing Debate
Ars Technica – U.S. Weighs Ban Over Security Concerns
Netgear Community – Has My Router Been Hacked?
Asia Financial – TP-Link Ban Report
CBS News – TP-Link Router Ban Considered
Keeper Security – Signs of Hacked Router
BleepingComputer – U.S. Considers Ban on TP-Link


Technology: What Is the Tech Stack?

A Tech Stack is the collection of technologies that powers modern websites, applications, and digital businesses. Whether developing software, launching SaaS, or deploying cloud solutions, understanding tech stacks is important for building technology platforms. In this blog post, learn about tech stack layers, vendor examples, and why choosing the right stack matters for businesses.

Friday, September 12, 2025

Ethics: Moral Ambivalence - Former CIA Agent Discusses Providing CSAM as a "Specialized Gift" to Foreign Targets

Andrew Bustamante states that he and his wife are former CIA agents. He has posted many videos and been interviewed widely on YouTube where he describes his exploits and explains some CIA operations.

In the following excerpt from a video (https://www.youtube.com/watch?v=LkOwKkivJ1E) he describes how the CIA would facilitate CSAM as a "specialized gift" to foreign targets who wanted it.

Tuesday, August 19, 2025

Leadership: The Dark Side of Leadership and Accountability: The Ethics of Plausible Deniability

Plausible Deniability

Plausible deniability is the capacity of an individual—often a senior official or leader—to credibly deny knowledge of or responsibility for illicit or unethical actions carried out by subordinates or associates, due to a lack of direct evidence linking them to those actions. The denial remains "plausible" because the circumstances or absence of proof prevent conclusive attribution, even if the individual was involved or willfully ignorant. This practice often involves deliberately structuring relationships and communications to ensure deniability, enabling those in authority to escape blame or legal consequences if activities are exposed.

Examples

Creating plausible deniability in the context of information technology and digital forensics may involve technical mechanisms or strategies that enable a person to credibly deny knowledge of, or control over, certain data or actions. Examples include:

  • Deniable Encrypted File Systems: Software such as VeraCrypt enables users to create hidden encrypted volumes within other encrypted containers. If compelled to reveal a password, a user can provide access to the “outer” volume while denying the existence of the “hidden” one. The existence of the hidden volume typically cannot be proven through standard forensic methods if configured correctly.
  • Hidden Operating Systems: VeraCrypt also supports the creation of a hidden OS within an encrypted partition. If a device is seized, the user can provide credentials for the decoy OS while maintaining plausible deniability about the hidden OS. Forensic detection becomes difficult if the hidden OS leaves no traces outside its partition.
  • Deniable Communication Protocols: Messaging solutions like Signal employ deniable authentication. Even if a transcript of communications is captured, it may be difficult for a third party to decrypt and prove who authored or participated in a conversation.
  • Anonymous Accounts: The creation and use of anonymous online accounts and pseudonymous email addresses allow users to plausibly deny authorship or control of content, as nothing is directly tied to their real identity if all technical precautions are maintained.
  • Obfuscation and Metadata Removal: Removing or falsifying metadata from documents, images, or other digital evidence can make attribution of authorship or origin difficult, supporting plausible deniability for content creators or transmitters.

These methods can be used to protect privacy and sensitive data, but they can also be abused to frustrate investigations and provide cover for illicit activity.

Challenging and Disproving Denials

Plausible deniability can be legally challenged or disproved in some situations, particularly when there is sufficient evidence to show that a person in authority did, in fact, have knowledge of or involvement in the questionable actions. Common scenarios in which plausible deniability fails or is overcome in court include:

  • Direct or Circumstantial Evidence: If investigators or prosecutors uncover direct evidence (such as emails, messages, recorded conversations, or documents) tying the individual to the actions, deniability collapses. Even strong circumstantial evidence can establish knowledge or intent, undermining plausible deniability.
  • Command Responsibility Doctrine: In military, law enforcement, or organizational contexts, leaders can be held legally responsible for the actions of subordinates if they knew or should have known about illegal acts and failed to prevent or punish them. Plausible deniability is not a defense if it can be shown that an official intentionally remained ignorant or deliberately failed to supervise.
  • Willful Blindness: Courts may challenge claims of plausible deniability if they find that a person “deliberately avoided” acquiring knowledge, a doctrine known as willful blindness. A person cannot escape liability simply by intentionally avoiding learning about potentially illegal activities.
  • Patterns of Conduct: Repeated patterns of behavior, communication, or organizational structure can indicate a deliberate attempt to insulate higher-ups from information while still enabling or authorizing misconduct.
  • Pleading Standards in Civil Cases: Under modern pleading standards (see Twombly, Iqbal), allegations must be plausible, not just possible. If a plaintiff presents enough factual content to allow an inference that the defendant was aware or involved, plausible deniability can be challenged at the motion to dismiss stage.
  • Legal Precedents: In cases such as Ashcroft v. Iqbal, the U.S. Supreme Court addressed whether defendants could be held liable if they were aware of subordinates’ actions, even if they denied direct involvement. The courts look for factual allegations that make liability plausible, not just possible.

Ashcroft v. Iqbal

In Ashcroft v. Iqbal (2009), the U.S. Supreme Court addressed what makes government officials’ liability claims “plausible” rather than merely possible. Javaid Iqbal alleged that officials, including former Attorney General Ashcroft and FBI Director Mueller, discriminated against him after 9/11. Iqbal lost at the Supreme Court. The Court held that Iqbal’s complaint did not contain enough specific factual content to plausibly suggest that Ashcroft and Mueller personally adopted discriminatory detention policies after 9/11. The Court found that Iqbal’s claims were based mostly on general accusations and lacked specific factual content tying Ashcroft and Mueller to unconstitutional conduct. The Court ruled that plausible deniability could hold if a complaint alleges only that high-level officials “knew of, condoned, and willfully and maliciously agreed to subject” someone to abuse “as a matter of policy.” The complaint must contain enough factual content to plausibly suggest a direct link, not just a possible inference, to overcome denials and survive a motion to dismiss. Plausible deniability is therefore protected unless the assertions are substantiated by facts allowing a reasonable inference of personal responsibility.

Summary

In summary, plausible deniability is not absolute—legal systems have developed doctrines and standards (such as command responsibility, willful blindness, and specific pleading requirements) to pierce denials when sufficient evidence exists that a person knew of or participated in the conduct in question.

Associated Resources

  • Studies discussing paradoxical leadership behavior, which often touches on the dark side of leadership and accountability, appear in peer-reviewed leadership and organizational psychology publications. See: Lee, A., Lyubovnikova, J., Zheng, Y., & Li, Z. F. (2023). Paradoxical leadership: A meta-analytical review. Frontiers in Organizational Psychology, 1, 1229543. https://doi.org/10.3389/forgp.2023.1229543
  • The doctrine of “command responsibility” is an important subject in legal scholarship, especially in international law and military law reviews. See: Chantal Meloni, Command Responsibility: Mode of Liability for the Crimes of Subordinates or Separate Offence of the Superior?, Journal of International Criminal Justice, Volume 5, Issue 3, July 2007, Pages 619–637, https://doi.org/10.1093/jicj/mqm029
  • Foundational legal analysis for willful blindness and command responsibility can be found in discussions of U.S. v. Jewell, Ashcroft v. Iqbal, and Twombly v. Bell Atlantic Corp. See legal journals and Supreme Court case analyses for detailed precedent.


Monday, July 07, 2025

Cybersecurity: Ubiquitous Technical Surveillance & Countermeasures: Existential Threats & Mitigations

Ubiquitous Technical Surveillance (UTS) is the widespread collection and analysis of data from various sources—ranging from visual and electronic devices to financial and travel records—for the purpose of connecting individuals, events, and locations. 

This surveillance poses risks to government operations, business organizations, and individuals alike, threatening to compromise sensitive investigations, personal privacy, and organizational security. The surprising findings of a recent audit of FBI techniques to address UTS further heighten the need for awareness and response to the threats. 

As the sophistication and reach of surveillance technologies continue to grow, understanding the nature of UTS and implementing effective Technical Surveillance Countermeasures (TSCM) is essential for safeguarding sensitive information and ensuring operational integrity. This work explores UTS and TSCM and suggests mitigation strategies to combat the threats.

Overview

Ubiquitous Technical Surveillance (UTS) refers to the pervasive collection and analysis of data, including visual, electronic, financial, travel, and online records, for the purpose of connecting individuals, events, and locations. The significance of the threats is outlined in a recently declassified but heavily redacted DOJ/OIG audit of the FBI's response to UTS (DOJ, 2025). Based on the number of redactions, particularly from the CIA's section of the report, it is reasonable to infer that many incidents have occurred that have not been reported to the public.

Technical Surveillance Countermeasures (TSCM) refers to specialized procedures and techniques designed to detect, locate, and neutralize unauthorized surveillance devices and eavesdropping threats. TSCM is commonly known as a "bug sweep" or "electronic counter-surveillance" and is used to protect sensitive information from being intercepted by covert listening devices, hidden cameras, or other forms of technical surveillance (REI, 2025), (Conflict International Limited, 2025).

UTS Devices, Data Sources, & Risks

Technical surveillance data collection can occur through a wide variety of devices and data sources, including mobile phones, CCTV and smart devices, financial and travel records, online activity, and commercially brokered datasets.

UTS is recognized as a significant and growing threat to government, business organizations, and individuals, with the potential to compromise investigations, business operations, and personal safety. When the collected technical surveillance information is in the wrong hands and used for nefarious purposes, harm can result.

UTS Threats

What are the UTS threats?

  • Significance: Described as an “existential threat” by the Central Intelligence Agency (CIA) due to its ability to compromise sensitive operations and personal safety (DOJ, 2025, p.4).

Risks:

  • Compromise of investigations, personnel PII, and sources (DOJ, 2025)
  • Exposure of operational details
  • Threats to personal and organizational security
  • Corporate espionage (Pinkerton, 2022)

Real-World UTS Scenarios

The following incidents are a sample of situations involving UTS.

  • Cartel Tracking via Phones and Cameras: Criminals exploited mobile phone data and city surveillance cameras to track and intimidate law enforcement and informants (DOJ, 2025, p.18).
  • Organized Crime and Phone Records: Crime groups used call logs and online searches to identify informants (DOJ, 2025, p.18).
  • Financial Metadata De-Anonymization: Commercial entities re-identified individuals from anonymized transaction data. In 2015, researchers at the Massachusetts Institute of Technology found that data from just four transactions was enough to positively identify the cardholder 90% of the time (DOJ, 2025, p.17).
  • Travel Data Correlation: Adversaries used travel records to reveal covert meetings and operational activities (DOJ, 2025, p.1).
  • Online Activity Analysis: Aggregated web and social media data to build detailed personal profiles (DOJ, 2025, p.1).
  • Visual Surveillance: Use of CCTV and smart devices for real-time tracking and event reconstruction.
  • Electronic Device Tracking: Exploitation of device signals and unique identifiers for location tracking.
  • Combined Data Exploitation: Overlaying multiple data sources to establish “patterns of life.”
  • Commercial Data Brokers: Purchase of large datasets for profiling and targeting.
  • Compromised Communications: Poorly secured communications exposing sensitive activities.

UTS Response: Organizational Challenges - FBI

The FBI identified UTS as an issue impacting the Bureau. However, a recently declassified audit of the FBI's approach to UTS by the Office of Inspector General (OIG) found several challenges and areas for improvement (DOJ, 2025, p.4).

OIG Audit of the FBI's Efforts (DOJ, 2025)

  • Red Team Analysis: Initial FBI efforts were high-level and did not fully address known vulnerabilities.
  • FBI Strategic Planning: Ongoing development, but lacking clear authority and coordination.
  • Training Gaps: Basic UTS training is mandatory for FBI personnel, but advanced training is limited and optional.
  • Incident Response: FBI Data breaches revealed policy gaps and lack of coordinated response.
  • Recommendations: The FBI needs comprehensive vulnerability documentation, strategic planning, clear authority, and expanded training.

Countermeasures & Best Practices

Combating the threats from UTS is a daunting challenge. Several steps can be taken to mitigate the threats.

Suggested General Countermeasures

  • Regular training on digital hygiene and counter-surveillance
  • Encryption of sensitive data and communications
  • Physical security for sensitive locations and devices
  • Vigilance and behavioral adaptation to signs of surveillance
  • Technical Surveillance Countermeasures (REI, 2025), (Conflict International Ltd, 2025), (EyeSpySupply, 2023).

Training & Awareness (DOJ, 2025)

  • Basic UTS Awareness: Should be mandatory for all FBI personnel.
  • Advanced UTS Training: Recommended for high-risk FBI roles; should be expanded and resourced.
  • Continuous Learning: Stay updated on emerging threats and countermeasures.

Incident Response Recommendations from the OIG Audit of the FBI (DOJ, 2025)

  • FBI should establish clear lines of authority for UTS incidents.
  • FBI should develop and rehearse coordinated response plans.
  • FBI should regularly review and update internal controls and policies.

Summary

The growing sophistication and reach of surveillance technologies have made UTS a threat to government operations, business organizations, and individuals. Real-world incidents demonstrate how adversaries exploit mobile phone data, surveillance cameras, financial transactions, and travel records to compromise investigations, expose operational details, and threaten personal and organizational security.

The FBI, recognizing UTS as an existential threat, has faced challenges such as insufficient planning, limited training, and gaps in incident response.

Technical Surveillance Countermeasures (TSCM), including procedures like bug sweeps and electronic counter-surveillance, are tools for detecting and mitigating unauthorized surveillance devices. Best practices for mitigation include regular training, encryption, physical security, and continuous awareness of emerging threats.

Conclusion

The risks posed by UTS are immediate and evolving, with the potential to undermine investigations, compromise privacy, and threaten organizational integrity. Effective countermeasures require a combination of technical solutions, organizational policies, and training. The findings of the OIG audit of the FBI highlight the need for clear authority, coordinated response plans, and regular updates to internal controls. As surveillance technologies continue to advance, adopting a proactive and comprehensive approach to counter-surveillance is important for safeguarding information and maintaining operational security.

References

Conflict International Ltd. (2025, June). Bug Sweeps (TSCM): Protecting Against AirTag Stalking and Modern Surveillance. https://conflictinternational.com/news/bug-sweeps-tscm-protecting-against-airtag-stalking-and-modern-surveillance

DOJ. (2025, June). Audit of the Federal Bureau of Investigation's Efforts to Mitigate the Effects of Ubiquitous Technical Surveillance. Department of Justice, Office of the Inspector General. https://oig.justice.gov/sites/default/files/reports/25-065.pdf

EyeSpySupply. (2023, December). The Importance of TSCM Equipment for Security. Blog. https://blog.eyespysupply.com/2023/12/29/the-importance-of-tscm-equipment-for-security/

Pinkerton. (2022, July). Technical Surveillance Countermeasures to Prevent Corporate Espionage. https://pinkerton.com/our-insights/blog/technical-surveillance-countermeasures-to-prevent-corporate-espionage

REI. (2025). Research Electronics Institute. TSCM Equipment and Training. https://reiusa.net/
