Falcon down! Falcon down!
A recent update to CrowdStrike Falcon has caused widespread system failures, leading to severe operational disruptions across various sectors globally, including critical infrastructure such as airports and hospitals.
Nature of the Issue
CrowdStrike Falcon, a widely used Endpoint Detection and Response (EDR) solution, requires frequent sensor updates to combat evolving cybersecurity threats. However, the latest update has triggered the notorious Blue Screen of Death (BSOD) on numerous Windows systems. macOS and Linux systems are not affected.
Impacted Sectors and Locations
Airports and Airlines:
- United States: American Airlines, Delta Air Lines, United Airlines, and Allegiant Air have suspended flights.
- Hong Kong: Passengers are undergoing manual check-ins.
- Europe: Issues reported by Lufthansa, Air France, KLM, SAS, and Swiss. Amsterdam Schiphol and Berlin Brandenburg (BER) airports are experiencing significant disruptions. Prague airport expects issues to last until the afternoon. Aena, which manages all Spanish airports, is also affected, raising the prospect of flight delays. At least one airport in Brussels has reported operational problems.
Media:
Sky News: The British news outlet has temporarily ceased broadcasting.
Response from CrowdStrike
George Kurtz, CEO of CrowdStrike, addressed the issue on social media platform X, attributing the cause to a faulty update for Windows systems. He assured that CrowdStrike is actively working with affected customers and that the problem has been identified, isolated, and a fix has been deployed.
Conclusion
This incident underscores the critical importance of rigorous testing and validation for software updates, especially for systems integral to public safety and infrastructure. Organizations using CrowdStrike Falcon on Windows should ensure they apply the latest patch and monitor their systems closely to mitigate any ongoing issues.
APT41 Infiltrates Networks in Multiple Countries
A sustained cyber campaign by the China-based APT41 hacking group has targeted organizations across multiple sectors in Italy, Spain, Taiwan, Thailand, Turkey, and the U.K., including global shipping and logistics, media and entertainment, technology, and automotive.
Nature and Methodology of Attacks
Tactics and Tools:
- Web Shells: ANTSWORD and BLUEBEAM
- Custom Droppers: DUSTPAN (StealthVector) and DUSTTRAP
- Public Tools: SQLULDR2 and PINEGROVE
APT41 has used a mix of web shells and custom malware to infiltrate and maintain unauthorized access to networks. They used ANTSWORD and BLUEBEAM web shells to deploy the DUSTPAN dropper, which in turn loaded Cobalt Strike Beacon for command-and-control communication. Following lateral movement within the network, the DUSTTRAP dropper was deployed to execute malicious payloads in memory, minimizing forensic traces.
Data Exfiltration Techniques
- SQLULDR2: Exporting data from Oracle Databases to local text files.
- PINEGROVE: Transmitting large volumes of data via Microsoft OneDrive.
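Defenders hunting for this tradecraft often start from process telemetry. As a purely illustrative sketch (the record layout and field names here are assumptions, not any vendor's actual schema), a scan for the tool names above might look like:

```python
# Hypothetical detection sketch: scan process-creation records for names
# tied to the exfiltration tools described above. The record layout
# (dicts with "timestamp", "process_name", "command_line") is assumed,
# not taken from any real EDR product.
INDICATORS = {
    "sqluldr2": "SQLULDR2: bulk export from Oracle databases",
    "pinegrove": "PINEGROVE: exfiltration via Microsoft OneDrive",
}

def flag_suspicious(records):
    """Return (timestamp, reason) pairs for records whose process name
    or command line mentions a known indicator (case-insensitive)."""
    hits = []
    for rec in records:
        haystack = (rec["process_name"] + " " + rec["command_line"]).lower()
        for needle, reason in INDICATORS.items():
            if needle in haystack:
                hits.append((rec["timestamp"], reason))
    return hits

sample = [
    {"timestamp": "2024-07-18T02:11:09Z",
     "process_name": "sqluldr2.exe",
     "command_line": "sqluldr2 query=... file=dump.txt"},
    {"timestamp": "2024-07-18T02:15:40Z",
     "process_name": "svchost.exe",
     "command_line": "svchost -k netsvcs"},
]
print(flag_suspicious(sample))
```

In practice such string matching is only a first pass; real detections would correlate with parent processes, database access patterns, and outbound OneDrive traffic volumes.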
Targeted Sectors and Geographic Distribution
- Shipping and Logistics: Primarily targeted in Europe and the Middle East.
- Media and Entertainment: Focused in Asia.
- Technology and Automotive: Specific organizations within these sectors were also compromised.
Response and Mitigation
Google’s Mandiant reported these intrusions, highlighting the use of non-public malware traditionally reserved for espionage operations. APT41’s blend of state-sponsored espionage and financially motivated activities makes them a unique and formidable threat. Google has remediated the compromised Workspace accounts used in the campaign.
Broader Implications
APT41’s operations reflect a blend of state-sponsored and independent financial motivations. While their espionage efforts have targeted sectors like healthcare, high-tech, and telecommunications, their financially driven activities have predominantly focused on the video game industry.
Conclusion
The widespread and sophisticated nature of APT41’s campaign underscores the need for robust cybersecurity measures and constant vigilance. Organizations, especially those in critical infrastructure and high-value sectors, must ensure they have advanced detection and response strategies in place to mitigate such persistent threats.
Apple and NVIDIA love YouTube!
AI companies have been found using thousands of YouTube videos to train AI models without permission, despite YouTube’s strict rules against such practices. An investigation revealed that major tech companies, including Apple, Nvidia, Anthropic, and Salesforce, extracted subtitles from over 173,000 YouTube videos across more than 48,000 channels.
The Dataset
The dataset, known as YouTube Subtitles, included transcripts from a variety of educational and entertainment channels. High-profile educational sources like Khan Academy, MIT, Harvard, and major media outlets such as the Wall Street Journal, NPR, and the BBC were utilized. Popular late-night TV shows, including “The Late Show With Stephen Colbert,” “Last Week Tonight With John Oliver,” and “Jimmy Kimmel Live,” were also included.
Impact on Creators
Several YouTube megastars were impacted by this data extraction. Notable figures include:
- MrBeast (289 million subscribers): two videos used for training
- Marques Brownlee (19 million subscribers): seven videos used
- Jacksepticeye (nearly 31 million subscribers): 377 videos used
- PewDiePie (111 million subscribers): 337 videos used
Reactions from Creators
David Pakman, host of “The David Pakman Show,” found that nearly 160 of his videos were included in the YouTube Subtitles dataset. Pakman emphasized the need for compensation, highlighting the significant resources invested in content creation. He pointed out that some media companies have secured agreements for paid use of their work in AI training, stressing the fairness of similar compensation for YouTubers.
Dave Wiskus, CEO of Nebula, a streaming service owned by its creators, condemned the practice as “theft” and warned of potential harm to artists. Wiskus expressed concern that generative AI could be used to replace artists, further underscoring the disrespectful nature of using creators’ work without consent.
Companies Involved
EleutherAI and The Pile
The dataset was part of a larger compilation called The Pile, released by EleutherAI. The Pile includes not only YouTube Subtitles but also materials from Wikipedia, the European Parliament, and a trove of Enron Corporation employees’ emails. EleutherAI aims to lower barriers to AI development by providing access to advanced AI technologies.
Usage by Major Tech Firms
Apple, Nvidia, Anthropic, and Salesforce have all used The Pile to train AI models. Apple used it to train OpenELM, a model released shortly before the company announced new AI features for iPhones and MacBooks. Salesforce used the dataset for academic and research purposes and released its model publicly; it has since been downloaded at least 86,000 times.
Anthropic, which received a $4 billion investment from Amazon, confirmed using The Pile to train its generative AI assistant, Claude. The company defended its use by stating that YouTube’s terms cover direct use of the platform, which it considers distinct from using datasets like The Pile.
Legal and Ethical Implications
The unauthorized use of YouTube videos for AI training raises significant legal and ethical questions. Some AI companies, including Meta, OpenAI, and Bloomberg, have faced lawsuits for alleged copyright violations. They argue that their actions constitute fair use, though these cases are still in early litigation stages.
Concerns from Creators
Creators like David Pakman and Dave Wiskus have voiced concerns about the potential for AI-generated content to replace human creators. Pakman described an incident where an AI-generated voice clone of Tucker Carlson read his script, demonstrating the technology’s potential to mislead and cause harm.
Conclusion
The widespread use of YouTube Subtitles by AI companies highlights ongoing tensions between AI development and intellectual property rights. As AI technologies continue to evolve, it is crucial to address these issues through better regulation and fair compensation for content creators. The situation underscores the need for transparency and ethical practices in the rapidly advancing field of artificial intelligence.