Friday, June 13, 2025

Recruiters Targeted by Fake Job Seekers in Malware Scam

Recruiters are facing a new cyber threat as financially motivated hackers, notably the FIN6 group (also known as Skeleton Spider), shift their tactics toward social engineering. The attackers pose as job seekers on popular platforms such as LinkedIn and Indeed, luring unsuspecting recruiters into downloading malware via fake portfolio websites.

How the Scam Works

The scam starts when cybercriminals, pretending to be legitimate job applicants, reach out to recruiters through job-hunting platforms. After initial contact, they send a follow-up phishing email that directs the recruiter to a convincing online portfolio site. These sites, often hosted on Amazon Web Services (AWS), mimic authentic job seeker pages, sometimes using plausible names associated with the applicant.

To evade automated security systems, the phishing emails do not contain clickable hyperlinks. Instead, recruiters are prompted to manually type the provided web address into their browser, which helps the attackers bypass link-detection tools[1].
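
Because the lure arrives as a bare, non-hyperlinked web address, URL-rewriting and link-detonation tools may never see it. One rough mitigation is to scan message bodies for plain-text domain patterns. The short sketch below is an illustrative assumption, not a feature of any particular mail gateway; the regex, the small TLD list, and the sample message are invented for the example.

```python
# Hedged sketch: flag plain-text (non-hyperlinked) web addresses in an email body.
# The regex, the short TLD list, and the sample message are illustrative assumptions.
import re

BARE_URL_PATTERN = re.compile(
    r"\b(?:[a-z0-9-]+\.)+(?:com|net|org|io|me|site|online)(?:/\S*)?",
    re.IGNORECASE,
)

def flag_bare_urls(message_body: str) -> list[str]:
    """Return bare web addresses found in the message text."""
    return BARE_URL_PATTERN.findall(message_body)

if __name__ == "__main__":
    sample = "Hello, my full portfolio is at janedoeportfolio.site/work if you would like to review it."
    print(flag_bare_urls(sample))  # ['janedoeportfolio.site/work']
```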

The Malware: More_eggs

Once on the fake portfolio site, the recruiter is asked to complete a CAPTCHA and other checks to prove they are human, further evading automated scanners. If they proceed, they are offered a ZIP file to download, purportedly a resume or work sample. Inside the ZIP is a Windows shortcut (.LNK) file that, when opened, executes a hidden JavaScript payload using wscript.exe. This payload connects to the attackers' command-and-control server and installs the More_eggs backdoor.
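
Before anyone opens such an archive, a simple triage step is to list its contents and flag extensions commonly abused as droppers, such as .LNK shortcuts and script files. The Python sketch below is a minimal illustration under those assumptions; the extension list and file names are examples, not indicators taken from the actual campaign.

```python
# Minimal archive-triage sketch (Python 3.9+). Lists ZIP members and flags
# extensions commonly abused as droppers; the extension list is an assumption.
import sys
import zipfile

SUSPICIOUS_EXTENSIONS = (".lnk", ".js", ".jse", ".vbs", ".wsf")

def flag_suspicious_members(zip_path: str) -> list[str]:
    """Return archive members whose extensions are commonly abused as droppers."""
    with zipfile.ZipFile(zip_path) as archive:
        return [
            name for name in archive.namelist()
            if name.lower().endswith(SUSPICIOUS_EXTENSIONS)
        ]

if __name__ == "__main__":
    # Usage example (hypothetical file name): python check_zip.py portfolio_sample.zip
    for member in flag_suspicious_members(sys.argv[1]):
        print(f"WARNING: suspicious member in archive: {member}")
```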

More_eggs is a modular, JavaScript-based malware-as-a-service tool that allows attackers to:

  • Remotely execute commands
  • Steal credentials
  • Deliver additional malicious payloads

Notably, More_eggs operates in memory on the user's device, making it harder for traditional antivirus solutions to detect.

Evasion Tactics

FIN6 leverages several techniques to avoid detection and takedown:

  • Anonymous Domain Registration: Domains are registered through GoDaddy with privacy services, obscuring the true identity of the registrants[1].
  • Cloud Hosting: Hosting malicious sites on AWS infrastructure provides legitimacy and resilience against quick takedowns[1].
  • Human Verification: CAPTCHAs and environmental checks ensure only real users (not automated scanners) reach the malware download stage[1].

Industry Response

AWS responded to the incident by reaffirming its commitment to enforcing its terms of service and collaborating with the security research community. The company encourages reporting of any suspected abuse through its dedicated channels for swift action.

Takeaways for Recruiters and Organizations

This campaign highlights the evolving landscape of cyber threats, where even those in hiring roles are now prime targets. Key steps for recruiters and organizations to protect themselves include:

  • Treat unsolicited portfolio links with suspicion, especially if they require manual entry into a browser.
  • Avoid downloading ZIP files or clicking on shortcut files from unknown or untrusted sources.
  • Ensure endpoint security solutions are updated and capable of detecting in-memory malware.
  • Report suspicious activity to IT or security teams immediately.

Recruiters and organizations should remain alert to these attacks and exercise caution when engaging with unfamiliar job applicants.

Thursday, June 12, 2025

Disturbing Spying Revelations: Meta/Facebook/Instagram & Yandex

Overview:

The web page https://localmess.github.io/ discloses a previously undocumented and highly invasive tracking technique used by Meta (Facebook/Instagram) and Yandex that affected billions of Android users. Researchers [4] discovered that this method covertly linked users' mobile web browsing sessions to their identities in native apps, bypassing standard privacy protections. 

The practice was active until early June 2025, when both Meta and Yandex, after being caught with their hands in the proverbial PII cookie-jar, ceased these behaviors following public disclosure [1][2][3].

Key Findings

1. Covert Web-to-App Tracking via Localhost on Android

  • Meta and Yandex embedded scripts (Meta Pixel and Yandex Metrica) on millions of websites.
  • When a user visited such a site in a mobile browser on Android, the script would communicate directly with native apps (like Facebook, Instagram, or Yandex Maps) installed on the same device.
  • This communication happened via localhost sockets, special network ports on the device that allow apps to talk to each other without user knowledge or consent [1][3].

2. How the Tracking Worked

  • Meta Pixel:
      o The Meta Pixel JavaScript sent the browser's _fbp cookie (used for advertising and analytics) to Meta apps via WebRTC (using STUN/TURN protocols) on specific UDP ports (12580–12585).
      o Native Facebook and Instagram apps listened on these ports in the background, received the _fbp value, and linked it to the user's app identity, effectively de-anonymizing web visits[1][3].
      o This bypassed protections like cookie clearing, incognito mode, and Android permission controls.
  • Yandex Metrica:
      o Yandex's script sent HTTP/HTTPS requests with tracking data to localhost ports (29009, 29010, 30102, 30103), where Yandex apps listened (a rough localhost probe is sketched after this list).
      o The apps responded with device identifiers (e.g., the Android Advertising ID), which the script then sent to Yandex servers, bridging web and app identities[1].
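
For readers who want a feel for what "listening on localhost" means, the sketch below attempts plain TCP connections to the ports reported for Yandex Metrica. It is not the researchers' methodology: it only covers the TCP side (the Meta Pixel path used WebRTC over UDP, which a connect test does not exercise), and it would have to run on the Android device itself (for example, in a terminal-emulator app or over adb) to be meaningful. The port numbers come from the cited research; everything else is an assumption for illustration.

```python
# Rough illustration of probing localhost listeners on the TCP ports reported
# for Yandex Metrica. Not the researchers' tooling; run on-device to be meaningful.
import socket

REPORTED_TCP_PORTS = [29009, 29010, 30102, 30103]

def probe_localhost(ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of ports that accept a TCP connection on 127.0.0.1."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when a local process accepts the connection
            if sock.connect_ex(("127.0.0.1", port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    listening = probe_localhost(REPORTED_TCP_PORTS)
    if listening:
        print(f"Something on this device is listening on: {listening}")
    else:
        print("No listeners found on the reported TCP ports.")
```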

3. Privacy and Security Implications

  • This method allowed companies to:
      o Circumvent privacy mechanisms such as incognito mode, cookie deletion, and even Android's app sandboxing.
      o Link browsing habits and cookies with persistent app/user identifiers, creating a cross-context profile of the user.
      o Potentially expose browsing history to any third-party app that listened on those ports, raising the risk of malicious exploitation[1][3].

4. Prevalence

  • Meta Pixel was found on over 5.8 million websites; Yandex Metrica on nearly 3 million.
  • In crawling studies, thousands of top-ranked sites were observed attempting localhost communications, often before users had given consent to tracking cookies[1].

5. Timeline and Disclosure

  • Yandex has used this technique since 2017; Meta adopted similar methods in late 2024.
  • Following responsible disclosure to browser vendors and public reporting in June 2025, both companies stopped the practice. Major browsers (Chrome, Firefox, DuckDuckGo, Brave) have since implemented or are developing mitigations to block such localhost abuse[1][3].

Technical Details

Aspect               | Meta/Facebook Pixel                              | Yandex Metrica
---------------------|--------------------------------------------------|------------------------------------------------
Communication Method | WebRTC STUN/TURN to UDP ports (12580–12585)      | HTTP/HTTPS requests to TCP ports (29009, etc.)
Data Shared          | _fbp cookie, browser metadata, page URLs         | Device IDs (AAID), browser metadata
Apps Involved        | Facebook, Instagram                              | Yandex Maps, Browser, Navigator, etc.
User Awareness       | None; bypassed consent and privacy controls      | None; bypassed consent and privacy controls
Platform Affected    | Android only (no evidence for iOS or desktop)    | Android only (no evidence for iOS or desktop)
Risk of Abuse        | High: enables de-anonymization, history leakage  | High: enables de-anonymization, history leakage

Broader Implications

  • Bypassing Privacy Controls: This method undermined the effectiveness of cookie controls, incognito/private browsing, and Android's app isolation, showing that even sophisticated privacy tools can be circumvented by creative inter-app communications [1][3].
  • Need for Platform-Level Fixes: Browser and OS vendors are now patching this specific exploit, but the underlying issue, unrestricted localhost socket access, remains a systemic risk on Android. The researchers call for stricter platform policies and user-facing controls for localhost access [1].
  • User and Developer Awareness: Most website owners were unaware their sites enabled this tracking. End users had no indication or control over the process. The lack of transparency and documentation from Meta and Yandex is highlighted as a major concern [1].

Conclusion

The research revealed a disturbing tracking vector that allowed Meta and Yandex to link users' web and app identities on Android at a massive scale, defeating standard privacy safeguards. The disclosure led to rapid mitigation, but the incident underscores the need for deeper systemic changes in how browsers and mobile platforms handle inter-app communications and tracking[1][2][3]. "This tracking method defeats Android's inter-process isolation and tracking protections based on partitioning, sandboxing, or clearing client-side state."[1]

1. https://localmess.github.io
2. https://www.grc.com/sn/sn-1029-notes.pdf
3. https://gigazine.net/gsc_news/en/20250604-meta-yandex-tracking/
4. Researchers and authors of the localmess GitHub page: Aniketh Girish (PhD student), Gunes Acar (Assistant Professor), Narseo Vallina-Rodriguez (Associate Professor), Nipuna Weerasekara (PhD student), Tim Vlummens (PhD student).

Note: Perplexity.AI was used to assist in preparing this report.

Thursday, May 22, 2025

Google's AI Glasses and Implications for Law Enforcement

Google announced a series of partnerships with eyewear companies to develop glasses that incorporate artificial intelligence (AI), marking a significant step in the evolution of wearable technology and its integration into daily life and professional sectors, including law enforcement.

Google's AI Glasses Partnerships

Google has committed up to $150 million to work with Warby Parker on the development and sale of AI-powered smart glasses, leveraging the Android XR platform and Gemini AI model[1][2][10]. The initiative extends to partnerships with other eyewear brands, such as Gentle Monster and Kering, and includes a broader collaboration with Samsung to build both the hardware and software foundation for future AR glasses[6][8][10]. The glasses will feature cameras, microphones, and speakers, providing hands-free access to information, live translation, and integration with users' smartphones[3][10]. Google states that its approach emphasizes making these devices both functional and suitable for all-day wear, with plans to involve developers in building applications for the platform later this year[6][10].

Implications for Law Enforcement

The integration of AI into smart glasses has implications for law enforcement operations:

  • Real-Time Data Access and Situational Awareness: AI-enabled glasses can provide officers with immediate access to critical information, such as suspect identification, navigation, and threat assessments, directly within their field of view[4][9][11]. This can streamline investigations, support enforcement actions, and enhance officer safety.
  • Facial Recognition and Surveillance: Smart glasses equipped with AI-driven facial recognition can rapidly compare faces in real time against law enforcement databases, aiding in the identification of suspects and missing persons[4][7][11]. Such systems have already been deployed in various jurisdictions, including China, Dubai, and New York, where they have improved the speed and accuracy of suspect recognition[4][7][11].
  • Evidence Collection and Communication: The ability to record and transmit evidence in real time, as well as translate languages or communicate with dispatch and other officers, can improve operational efficiency and support community engagement[4][11].
  • Privacy and Ethical Concerns: The widespread use of AI-powered smart glasses raises privacy issues. Real-time surveillance and facial recognition capabilities may lead to concerns about data security, potential misidentification, and the erosion of privacy in public spaces[4][5][9]. Research and pilot programs have emphasized the need for ethical frameworks, clear protocols, and legislation to govern the use of such technologies in law enforcement, aiming to balance operational benefits with the protection of civil liberties[4][9][11].

Challenges and Considerations

  • Public Trust and Acceptance: The deployment of AI smart glasses by law enforcement requires transparency and public engagement to address concerns about surveillance and misuse[4][9][11].
  • Technical and Operational Readiness: Successful integration depends on reliable hardware, effective AI algorithms, and compatibility with existing law enforcement databases and workflows[4][11].
  • Legislation and Policy: Policymakers should consider establishing clear guidelines for the appropriate use of smart glasses, including data handling, retention, and oversight mechanisms[4][9].

Conclusion

Google's partnerships to develop AI-powered smart glasses signal a shift toward more immersive and context-aware wearable technology. For law enforcement, these advancements offer new tools for real-time information access, surveillance, and communication. However, adoption should be accompanied by careful consideration of privacy and ethics to ensure lawful use.


Friday, May 16, 2025

Understanding False-Positive Hallucinations in AI Research: Implications for Academic Integrity

The emergence of generative artificial intelligence tools has transformed academic research. While these tools offer significant assistance in content generation, they also introduce unique challenges, particularly the phenomenon known as "hallucinations." This (AI-assisted) analysis explores the nature, prevalence, and implications of false-positive hallucinations in AI-assisted academic work, along with strategies for educators and students.

Defining False-Positive Hallucinations in AI Research

AI hallucinations occur when large language models (LLMs) generate content that appears factual and authoritative but is actually incorrect, fabricated, or misleading. In the academic context, false-positive hallucinations specifically refer to instances where AI systems confidently present fabricated information as legitimate scholarly content.

Recent research defines hallucinations as occurring "anytime an AI responds incorrectly to a prompt that it should be able to respond correctly to," with outputs presented as facts within an otherwise factual context despite their fundamental flaws[12]. A more nuanced taxonomy categorizes hallucinations based on their degree (mild, moderate, alarming), orientation (factual mirage or silver lining), and specific types such as acronym ambiguity, numeric nuisance, generated golem, virtual voice, geographic erratum, and time wrap[2].

Unlike traditional misinformation, AI hallucinations emerge not from human intent to deceive but from probabilistic processes within the models themselves. This distinction prompted researchers to propose conceptual frameworks that treat AI hallucinations as a distinct form of misinformation requiring specialized understanding and mitigation approaches[10].

The most concerning manifestation in academic contexts is the generation of seemingly legitimate citations and references that don't actually exist. These fabricated references often appear remarkably convincing, using names of authors with previous relevant publications, creating plausible titles, and formatting citations in credible journal styles[3][8].

Types of Academic Hallucinations

Academic hallucinations typically manifest in several forms:

  1. Reference fabrication: Generation of non-existent academic sources
  2. Fact fabrication: Creation of plausible but false statistical data or research findings
  3. Expert fabrication: Attribution of statements to non-existent or misrepresented authorities
  4. Methodological fabrication: Description of studies or experiments that never occurred

Prevalence of False-Positive Hallucinations in Academic AI

The prevalence of AI hallucinations in academic contexts is alarmingly high across various disciplines. In the medical domain, a systematic analysis of ChatGPT responses to medical questions revealed that 69% of references provided were completely fabricated, despite appearing authentic and professionally formatted[8]. The responses themselves demonstrated limited quality with a median score of 60% as rated by medical experts, who identified both major and minor factual errors throughout the evaluated content[8].

Legal research tools incorporating AI technologies fare somewhat better but still exhibit significant hallucination rates. An evaluation of leading legal research AI systems including LexisNexis (Lexis+ AI) and Thomson Reuters (Westlaw AI-Assisted Research and Ask Practical Law AI) found hallucination rates between 17% and 33%, despite vendors' claims of "hallucination-free" legal citations[14].

These findings contradict optimistic marketing claims that retrieval-augmented generation (RAG) and similar techniques have "eliminated" or "avoided" hallucinations in specialized academic AI tools. While RAG does reduce hallucination rates compared to general-purpose chatbots, it clearly has not resolved the issue entirely[14].

The problem extends beyond specialized tools to general-purpose LLMs used directly by students and researchers. A preliminary investigation of ChatGPT revealed its tendency to generate fake peer-reviewed citations that appear legitimate but are entirely fabricated through predictive processes rather than factual knowledge[3].

Notable Examples of False-Positive Hallucinations in Academia

Several documented instances illustrate the real-world impact of AI hallucinations in academic settings:

Medical Research Hallucinations

A particularly concerning example comes from medical research, where ChatGPT provided responses containing both major and minor factual errors when answering medical questions. In one instance, when prompted to provide references for its claims, the AI generated a citation to a seemingly authoritative journal article, complete with real researcher names and a plausible title, that simply did not exist[8]. This fabrication is especially problematic in medical contexts where treatment decisions might be influenced by such misinformation.

Legal Research Fabrications

In the legal domain, AI research tools have been documented producing non-existent case law citations. A preregistered empirical evaluation demonstrated that proprietary legal AI tools would create citations to court cases that never occurred or substantially misrepresent the holdings of real cases[14]. What makes these hallucinations particularly dangerous is their presentation within otherwise factually accurate content, making detection challenging for even experienced legal professionals.

Academic Writing Distortions

In academic writing contexts, AI tools have generated fake journal articles, conferences, and even entire research institutions that sound plausible but don't actually exist. The fabricated references often follow proper citation formats and include realistic publication dates, journal names, and volume numbers, all contributing to their deceptive authenticity[3].

What makes these examples particularly troubling is the confidence with which AI systems present hallucinated information, often seamlessly integrating fabrications with factual content. This blending makes detection challenging without thorough verification of every claimed source.

Should Instructors Require URLs to Source Materials?

Given the prevalence of AI hallucinations, the question arises whether instructors should require students to provide direct URLs to source materials referenced in their submissions.

Arguments Supporting URL Requirements

Requiring URLs to source materials creates an additional verification layer that could significantly reduce the risk of undetected hallucinations. This approach:

  1. Forces students to verify that their sources actually exist before submission
  2. Streamlines the instructor's verification process
  3. Creates a habit of source verification that serves students throughout their academic careers
  4. Aligns with emerging institutional policies on AI use in academic writing[13]

A survey of policies from top universities in English-speaking countries reveals a trend toward requiring transparency in AI use, with explicit guidelines for both academic staff and students[13]. Extending these transparency requirements to include verifiable source links represents a logical evolution of these policies.

Arguments Against Strict URL Requirements

However, mandatory URL requirements present several challenges:

  1. Not all legitimate academic sources have accessible URLs (physical books, paywalled content, etc.)
  2. The focus on URLs might prioritize digital sources over print resources
  3. Implementation could create additional workload for both students and instructors
  4. URL verification alone doesn't guarantee content accuracy

The increasing integration of AI writing tools into academic environments suggests that education should focus on responsible use rather than prohibition[5]. Surveys of EFL university students indicate that these tools are already actively used for enhancing writing quality, with tools like Grammarly and ChatGPT being particularly favored[5][11].

Balanced Recommendation

A balanced approach would involve requiring URLs or other definitive source identifiers (DOIs, ISBN numbers, etc.) for key claims and statistics while focusing on broader verification skills. Instructors should develop clear guidelines for AI use in academic writing that emphasize transparency about AI assistance while teaching responsible verification practices[9][13].

Strategies for Students to Address False-Positive Hallucinations

Students can take several proactive steps to minimize the risk of including hallucinated content in their academic submissions:

1. Verify Every Citation

When using AI tools for research assistance or writing support, students should verify every citation and factual claim generated. This verification process should include:

  • Confirming the existence of cited sources through library databases or academic search engines
  • Cross-checking key facts with multiple reliable sources
  • Being particularly cautious with statistical claims and specific numerical data[3][8]
  • Using in-text citations that correspond to entries in the References section of academic submissions (a minimal source-existence lookup is sketched after this list)
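
One concrete way to confirm that a cited work exists is to look up its DOI against a bibliographic registry. The sketch below queries the public Crossref REST API (api.crossref.org); it assumes network access, and a successful lookup only shows that a registered work exists, not that its content supports the claim being cited. Sources without DOIs still need to be checked through library databases or academic search engines.

```python
# Hedged sketch: check whether a DOI resolves to a registered work via the
# public Crossref REST API. Existence is not the same as relevance or accuracy.
import json
import urllib.error
import urllib.parse
import urllib.request

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI."""
    url = f"https://api.crossref.org/works/{urllib.parse.quote(doi)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            record = json.load(response)
            return record.get("status") == "ok"
    except urllib.error.HTTPError:
        return False  # Crossref returns 404 when no such DOI is registered

if __name__ == "__main__":
    # Replace with the DOI listed in the citation under review
    print(doi_exists("10.1000/example-doi"))
```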

2. Use AI as a Supplement, Not a Replacement

Research on EFL students' experiences with ChatGPT suggests that the most successful approach involves using AI as a supporting tool rather than the primary source of content. Students benefit most when using AI for:

  • Overcoming uncertainties
  • Clarifying vocabulary
  • Receiving content suggestions that they then critically evaluate and refine

This approach enhances essay quality by allowing students to focus on creative aspects while maintaining authenticity in their work[11].

3. Develop Critical Evaluation Skills

Students should develop and apply critical thinking skills specifically adapted to evaluating AI-generated content. This includes:

  • Questioning implausible or too-perfect statistics
  • Being skeptical of convenient but unsourced claims
  • Looking for internal consistency in arguments and evidence
  • Recognizing that AI tends to provide overly complex suggestions that may lack cultural sensitivity[11]

4. Declare AI Usage Transparently

Transparency regarding AI tool usage is increasingly recognized as an ethical requirement in academic writing. Students should:

  • Follow institutional guidelines for declaring AI assistance
  • Specify which portions of their work were AI-assisted
  • Describe their verification process
  • Maintain a clear distinction between AI-suggested content and their original analysis[9]

A study of transparency in academic research journals found that 37.6% of nursing studies journals now require explicit statements about generative AI use in their authors' guidelines, indicating a growing expectation for transparency[9].

5. Utilize Multiple Tools and Approaches

Students can reduce hallucination risks by:

  • Using multiple AI tools and comparing outputs
  • Employing specialized AI detection tools to identify potentially hallucinated content
  • Combining AI assistance with traditional research methods
  • Seeking human feedback from peers, writing centers, or instructors[11][13]

Conclusion

False-positive hallucinations represent a significant challenge in AI-assisted academic research, with documented prevalence across disciplines including medicine, law, and general academic writing. The ability of AI systems to generate convincing yet entirely fabricated references and facts threatens academic integrity and the reliability of scholarly work.

Rather than prohibiting AI tools, educational institutions should develop comprehensive policies that balance innovation with integrity. These policies should emphasize transparency, verification, and responsible use while equipping students with the critical skills needed to detect and avoid hallucinated content.

For students, the key to successful navigation of this new landscape lies in developing a balanced approach: leveraging AI's capabilities for improving writing quality and efficiency while maintaining rigorous verification practices and critical evaluation of all AI-generated content.

As AI evolves, so too must the approaches to academic integrity. By acknowledging the challenges posed by hallucinations and implementing targeted strategies to address them, academia can harness the benefits of AI while preserving the values of rigor and honesty.

Note: The author was assisted by Artificial Intelligence in the creation of this document. Efforts were made to verify the source material. 

https://kardasz.blogspot.com/2025/05/understanding-false-positive.html
 
Frank Kardasz, May 16, 2025