The convicted co-author of the highly disruptive Mirai botnet malware strain has been sentenced to 2,500 hours of community service, six months home confinement, and ordered to pay $8.6 million in restitution for repeatedly using Mirai to take down Internet services at Rutgers University, his alma mater. Paras Jha, a 22-year-old computer whiz from Fanwood, N.J., was studying computer science at Rutgers when he developed Mirai along with two other convicted co-conspirators. According to a sentencing memo submitted by government prosecutors, in his freshman and sophomore years at Rutgers Jha used a collection of hacked devices to launch at least four distributed denial-of-service (DDoS) attacks against the university’s networks. Jha told investigators he carried out the attacks not for profit but purely for personal, juvenile reasons: “He reveled in the uproar caused by the first attack, which he launched to delay upperclassmen registration for an advanced computer science class he wanted to take,” the government’s sentencing memo stated. “The second attack was launched to delay his calculus exam. The last two attacks were motivated in part by the publicity and outrage” his previous attacks had generated. Jha would later drop out of Rutgers after struggling academically. In January 2017, almost a year before Jha’s arrest and guilty plea, KrebsOnSecurity identified Jha as the likely co-author of Mirai — which sprang to notoriety after a record-smashing Sept. 2016 attack that sidelined this Web site for nearly four days. That story posited that Jha, operating under the pseudonyms “Ogmemes” and “OgRichardStallman,” gave interviews to a local paper in which he taunted Rutgers and encouraged the school to consider purchasing some kind of DDoS protection service to ward off future attacks. At the time, Jha was president and co-founder of ProTraf Solutions, a DDoS mitigation firm that provided just such a service.
The sentence handed down by a federal judge in Trenton today comes on the heels of Jha’s September 2018 sentencing in an Alaska court for his admitted role in creating, wielding and selling access to Mirai — malware which enslaves poorly-secured Internet of Things (IoT) devices like security cameras and digital video recorders for use in extremely powerful attacks capable of knocking most Web sites offline. Prosecutors in the Alaska case said Jha and two co-conspirators did not deserve jail time for their crimes because the trio had cooperated fully with the government and helped investigators with multiple other ongoing cybercrime investigations. The judge in that case agreed, giving Jha and each of his two co-defendants sentences of five years’ probation, 2,500 hours of community service, and $127,000 in fines. Prosecutors in Alaska argued that Jha had completely turned over a new leaf, noting that he was once again attending school and had even landed a job at an unnamed cybersecurity company. Sending him to prison, they reasoned, would only disrupt a remarkable transformation for a gifted young man. However, the punishment meted out today for the Rutgers attacks requires Jha to remain sequestered in his parents’ New Jersey home for the next six months — with excursions allowed only for medical reasons. The sentence also piles on an additional 2,500 hours of community service. Further, Jha will be on the hook to pay $8.6 million in restitution — the amount Rutgers estimated it cost the university to respond to Jha’s attacks. Jha could not be immediately reached for comment. But his attorney Robert Stahl told KrebsOnSecurity today’s decision by the New Jersey court was “thoughtful and reasoned.” “The judge noted that Paras’ cooperation has been much more extensive and valuable than any he’s ever seen while on the bench,” Stahl said.
“He won’t be going back to school right now or to his job.” It is likely that Jha’s creation will outlive his probation and community service. After the Sept. 2016 attack on KrebsOnSecurity and several other targets, Jha and his cohorts released the source code for Mirai in a bid to throw investigators off their trail. That action has since spawned legions of copycat Mirai botnets and Mirai malware variants that persist to this day. Update, Oct. 27, 9:30 a.m. ET: A previous version of this story incorrectly stated that the courthouse in Friday’s sentencing was located in Newark. It was in Trenton. The above copy has been changed. from https://krebsonsecurity.com/2018/10/mirai-co-author-gets-6-months-confinement-8-6m-in-fines-for-rutgers-attacks/
The fraudsters behind the often laughable Nigerian prince email scams have long since branched out into far more serious and lucrative forms of fraud, including account takeovers, phishing, dating scams, and malware deployment. Combating such a multifarious menace can seem daunting, and it calls for concerted efforts to tackle the problem from many different angles. This post examines the work of a large, private group of volunteers dedicated to doing just that. According to the most recent statistics from the FBI‘s Internet Crime Complaint Center, the most costly form of cybercrime stems from a complex type of fraud known as the “Business Email Compromise” or BEC scam. A typical BEC scam involves phony e-mails in which the attacker spoofs a message from an executive at a company or a real estate escrow firm and tricks someone into wiring funds to the fraudsters. The FBI says BEC scams netted thieves more than $12 billion between 2013 and 2018. However, BEC scams succeed thanks to help from a variety of seemingly unrelated types of online fraud — most especially dating scams. I recently interviewed Ronnie Tokazowski, a reverse engineer at New York City-based security firm Flashpoint and something of an expert on BEC fraud. Tokazowski is an expert on the subject thanks to his founding in 2015 of the BEC Mailing List, a private discussion group comprising more than 530 experts from a cross section of security firms, Internet and email providers and law enforcement agents that is dedicated to making life more difficult for scammers who perpetrate these schemes. Earlier this month, Tokazowski was given the JD Falk award by the Messaging Malware Mobile Anti-Abuse Working Group (M3AAWG) for his efforts in building and growing the BEC List (loyal readers here may recognize the M3AAWG name: KrebsOnSecurity received a different award from M3AAWG in 2014). 
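The display-name spoofing described above (a fraudster posing as a company executive to trick someone into wiring funds) can often be caught with a simple header check. Below is a minimal sketch of that idea; the executive list, company domain, and sample messages are hypothetical placeholders, not details from the article:

```python
import email
from email.utils import parseaddr

# Hypothetical examples for illustration -- substitute your own
# executive roster and mail domain.
KNOWN_EXECS = {"Jane Doe"}
COMPANY_DOMAIN = "example.com"

def looks_like_bec_spoof(raw_message: str) -> bool:
    """Flag messages whose From display name matches a known executive
    but whose actual sending address is outside the company domain --
    the classic BEC display-name spoof."""
    msg = email.message_from_string(raw_message)
    display_name, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    return display_name in KNOWN_EXECS and domain != COMPANY_DOMAIN

spoofed = "From: Jane Doe <jane.doe@freemail-lookalike.com>\nSubject: Urgent wire\n\nPlease wire funds today."
legit = "From: Jane Doe <jane.doe@example.com>\nSubject: Lunch\n\nSee you at noon."
print(looks_like_bec_spoof(spoofed))  # True
print(looks_like_bec_spoof(legit))    # False
```

Real mail filters combine checks like this with SPF/DKIM/DMARC results, but the display-name mismatch alone catches a surprising share of executive-impersonation attempts.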
M3AAWG presents its JD Falk Award annually to recognize “a project that helps protect the internet and embodies a spirit of volunteerism and community building.” Here are some snippets from our conversation: Brian Krebs (BK): You were given the award by M3AAWG in part for your role in starting the BEC mailing list, but more importantly for the list’s subsequent growth and impact on the BEC problem as a whole. Talk about why and how that got started and evolved. Ronnie Tokazowski (RT): The why is that there’s a lot of money being lost to this type of fraud. If you just look at the financial losses across cybercrime — including ransomware, banking trojans and everything else — BEC is number one. Something like 63 percent of fraud losses reported to the FBI are related to it. When we started the list around Christmas of 2015, it was just myself and one FBI agent. When we had our first conference in May 2016, there were about 20 people attending to try to figure out how to tackle all of the individual pieces of this type of fraud. Fast forward to today, and the group now has about 530 people, we’ve now held three conferences, and collectively the group has directly or indirectly contributed to over 100 arrests for people involved in BEC scams. BK: What did you discover as the group began to coalesce? RT: As we started getting more and more people involved, we realized BEC was much broader than just phishing emails. These guys actually maintain vast networks of money mules, technical and logistical infrastructure, as well as tons of romance scam accounts that they have to maintain over time. BK: I want to ask you more about the romance scam aspect of BEC fraud in just a moment, because that’s one of the most fascinating cogs in this enormous crime machine. But I’m curious about what short-term goals the group set in identifying the individuals behind these extremely lucrative scams? 
RT: We wanted to start a collaboration group to fight BEC, and really a big part of that involved just trying to social engineer the actors and get them to click on links that we could use to find out more about them and where they’re coming from. BK: And where are they coming from? When I’ve written about BEC scams previously and found most of them trace back to criminals in Nigeria, people often respond that this is just a stereotype, prejudice, or over-generalization. What’s been your experience? RT: Right. A lot of people think Nigeria is just a scapegoat. However, when we trace back phone numbers, IP addresses and language usage, the vast majority of that is coming out of Nigeria. BK: Why do you think so much of this type of fraud comes out of Nigeria? RT: Well, corruption is a big problem there, but also there’s this subculture where doing this type of wire fraud isn’t seen as malicious exactly. There’s not only a lot of poverty there, but also a very strong subculture there to support this type of fraud, and a lot of times these actors justify their actions by seeing it as attacking organizations, and not the people behind those organizations. I think also because they rationalize that individuals who are victimized will ultimately get their money back. But of course in a lot of cases, they don’t. BK: Is that why so many of these Nigerian prince, romance and BEC scams aren’t exactly worded in proper English and tend to read kind of funny sometimes? RT: While a lot of the scammers are typically from Nigeria, the people doing the actual spamming side typically come from a mix of other countries in the region, including Algeria, Morocco and Tunisia. And it’s interesting looking at these scams from a language perspective, because you have them writing in English that’s also influenced by [people who speak] French and Arabic. So that explains why the emails often are written in poor English whereas to them it seems normal. BK: Let’s talk about the romance scams. 
How does online dating fraud fit into the BEC scam? RT: [The fraudsters] will impersonate both men and women who are single, divorced or widowed. But their primary target is female widows who are active on social media sites. BK: And in most of these cases the object of the phony affection is what? To create a relationship so that the other person feels comfortable accepting money or moving money on behalf of their significant other, right? RT: Yes, they end up being recruited as money mules. Or maybe they’re groomed in order to set up a bank account for their lovers. We’ve dealt with multiple cases where we see a money mule account coming through and then look that person up on social media and are quickly able to see they were friends with a clearly fake profile or a profile that we’ve already identified as a BEC scammer. So there is a very strong tie between these BEC scams and romance scams. BK: Are all of the romance scam victims truly unwitting, do you think? RT: With the mules who don’t one hundred percent know what they’re doing, they might be [susceptible to the suggestion] hey, could you open this account for me. The second type of mule can be on the payroll [of the scam organization] and getting a cut of the money for assisting in the wiring of money [to the fraudsters’ accounts.] BK: I saw in one of your tweets you mentioned personally interacting with some of these BEC scammers. RT: Yeah, a few weeks ago I was running a romance scammer who reached out and added me as a friend on Facebook. The story they were telling was that this person was a 43-year-old single mom with a kid, looking for companionship. By day 4 [of back and forth conversations] they were asking me to send them iTunes gift cards. BK: Hah! So what happened then? RT: I went to my local grocery store, which was all too willing to help. When you’re trying to catch scammers, it doesn’t cost the store a dime to give you non-activated iTunes gift cards. BK: That sounds like fun.
Beyond scamming the scammers to learn more about their operations and who they are, can you talk about what you and other members of the BEC working group have been trying to accomplish to strategically fight this kind of fraud? RT: What we found was with BEC fraud it’s really hard to find ownership, because there’s no one entity that’s responsible for shutting it down. There are a lot of moving parts to the BEC scam, including lots of romance scam social media accounts, multiple email providers, and bank accounts tied to money mules that get pulled into these scams. The feds get a lot of flack for not making arrests, the private sector gets criticized for not doing more, and a lot of people are placing the blame on social media for not doing more. But the truth is that in order to address BEC as a whole we all have to work together on that. It’s like the old saying: How do you eat an elephant? One bite at a time. BK: So the primary goal of the group was to figure out ways to get better and faster at shutting down the resources used by these fraudsters? RT: Correct. The main [focus] we set when starting this group was the sheer length of time it takes for law enforcement to put together a subpoena, which can take up to 30 days to process and get the requested information back that allows you to see who was logged into what account, when and from where. At the same time, these bad actors can stand up a bunch of new accounts each day. So the question was how do we figure out a good way to start whacking the email accounts and moving much faster than the subpoena process allows. The overall goal of the BEC group has been to put everyone in the same room, [including] social media and email providers and security companies, so that we can attack this problem from all sides at once. BK: I see. In other words, making it easier for companies that have a role to play to be proactive in shutting down resources that are used by the BEC scammers. RT: Exactly. 
And so far we have helped to close hundreds of accounts, helped contribute directly or indirectly to dozens of arrests, and prevented millions of dollars in fraud. BK: At the same time, this work must feel like a somewhat Sisyphean task. I mean, it costs the bad guys almost nothing to set up new accounts, and there seems to be no limit to the number of people participating in various aspects of these scams. RT: That’s true, and even with 530 people from dozens of companies and organizations in this BEC working group now it sometimes doesn’t feel like we’re making enough of an impact. But the way I look at it is for each account we get taken down, that’s someone’s father or mother who’s not being scammed and losing their inheritance to a Nigerian scammer. The one thing I’m proud of is we’ve now operated for three years and have had very few snafus. It’s been very cool to watch the amount of trust that organizations have put into this group and to be along for the ride there in seeing so many competitors actually working together.

Anyone interested in helping in the fight against BEC fraud and related scams should check out the Web site 419eater.com, which includes a ton of helpful resources for learning more. My favorite section of the site is the Letters Archive, which features often hilarious email threads between the scammers and “scam baiters” — volunteers dedicated to stringing the scammers along and exposing them publicly.

Related reading:
Business Email Compromise: Putting a Wisconsin Case Under the Microscope
Spy Service Exposes Nigerian Yahoo Boys
Yahoo Boys Have 419 Facebook Friends
Deleted Facebook Cybercrime Groups Had 300,000 Members
Where Did That Scammer Get Your Email Address?
from https://krebsonsecurity.com/2018/10/how-do-you-fight-a-12b-fraud-problem-one-scammer-at-a-time/

A powerful, easy-to-use password-stealing program known as Agent Tesla has been infecting computers since 2014, but recently this malware strain has seen a surge in popularity — attracting more than 6,300 customers who pay monthly fees to license the software. Although Agent Tesla includes a multitude of features designed to help it remain undetected on host computers, the malware’s apparent creator seems to have done little to hide his real-life identity. The proprietors of Agent Tesla market their product at agenttesla-dot-com, selling access to the software in monthly licenses paid for via bitcoin, for prices ranging from $15 to $69 per month depending on the desired features. The Agent Tesla Web site emphasizes that the software is strictly “for monitoring your personel [sic] computer.” The site’s “about” page states that Agent Tesla “is not a malware. Please, don’t use for computers which is not access permission.” To backstop this disclaimer, the site warns that any users caught doing otherwise will have their software licenses revoked and subscriptions canceled. At the same time, the Agent Tesla Web site and its 24/7 technical support channel (offered via Discord) are replete with instances of support personnel instructing users on ways to evade antivirus software detection, use software vulnerabilities to deploy the product, and secretly bundle the program inside of other file types, such as images, text, audio and even Microsoft Office files. In August 2018, computer security firm LastLine said it witnessed a 100 percent increase in Agent Tesla instances detected in the wild over just a three-month period. “Acting as a fully-functional information stealer, it is capable of extracting credentials from different browsers, mail, and FTP clients,” LastLine wrote.
“It logs keys and clipboards data, captures screen and video, and performs form-grabbing (Instagram, Twitter, Gmail, Facebook, etc.) attacks.”

I CAN HAZ TESLA

The earliest versions of Agent Tesla were made available for free via a Turkish-language WordPress site that oddly enough remains online (agenttesla.wordpress-dot-com), although its home page now instructs users to visit the current AgentTesla-dot-com domain. Not long after that WordPress site was erected, its author(s) began charging for the software, accepting payments via a variety of means, including PayPal, Bitcoin and even wire transfer to several bank accounts in Turkey. Historic WHOIS Web site registration records maintained by Domaintools.com show that the current domain for the software — agenttesla-dot-com — was registered in 2014 to a young man from Antalya, Turkey named Mustafa can Ozaydin, and to the email address [email protected]. Sometime in mid-2016 the site’s registration records were hidden behind WHOIS privacy services [full disclosure: Domaintools is a previous advertiser on KrebsOnSecurity]. That Gmail address is tied to a Youtube.com account for a Turkish individual by the same name who has uploaded exactly three videos over the past four years. In one of them, uploaded in October 2017 and titled “web panel,” Mr. can Ozaydin demonstrates how to configure a Web site. At around 3:45 in the video, we can see the purpose of this demonstration is to show people one way to install an Agent Tesla control panel to keep track of systems infected with the malware. Incidentally, the administrator of the 24/7 live support channel for Agent Tesla users at one point instructed customers to view this same video if they were having trouble figuring out how to deploy the control panel.
The profile picture shown in that Youtube account is remarkably similar to the one displayed on the Twitter account “MCanOZAYDIN.” This Twitter profile makes no mention of Agent Tesla, but it does state that Mustafa can Ozaydin is an “information technology specialist” in Antalya, Turkey. That Twitter profile also shows up on a Facebook account for a Mustafa can Ozaydin from Turkey. A LinkedIn profile for a person by the same name from Antalya, Turkey states that Mr. can Ozaydin is currently a “systems support expert” for Memorial Healthcare Group, a hospital in Istanbul. KrebsOnSecurity first reached out for comment to all of these accounts back in August 2018, but received no reply. Repeated attempts to reach those accounts this past week also elicited no response.

MALWARE OR BENIGN REMOTE ACCESS TOOL?

Many readers here have taken the view that tools like Agent Tesla are functionally no different from more mainstream “remote administration tools” like GoToMyPC, VNC, or LogMeIn, products that are frequently used by tech support personnel to remotely manage one or more systems to which those personnel legitimately have access rights. U.S. federal prosecutors, meanwhile, have adopted a different position. Namely, when someone selling a remote administration tool begins instructing customers on how to install the product in ways that are arguably deceptive (such as through the use of software exploits, spam or disguising the tool as another program), the proprietor has crossed a legal line and can be criminally prosecuted under computer misuse laws. In previous such cases, the prosecution’s argument has hinged on the procurement of chat logs showing that the software seller knew full well his product was being used to infect computers without the users’ knowledge or permission. Last week, a Lexington, Ky.
man was sentenced to 30 months in federal prison after pleading guilty to conspiracy to unlawfully access computers in connection with his admitted authorship of a remote access tool called LuminosityLink. Colton Grubbs, 21, admitted to selling his software for $39.99 apiece to more than 6,000 customers in at least 78 different countries. LuminosityLink allowed his customers to record the keys that victims pressed on their keyboards, spy on victims using their computers’ cameras and microphones, view and download the computers’ files, and steal names and passwords used to access websites. “Directly and indirectly, Grubbs offered assistance to his customers on how to use LuminosityLink for unauthorized computer intrusions through posts and group chats on websites such as HackForums,” the Justice Department wrote in a press release about the sentencing. Grubbs must also forfeit the proceeds of his crimes, including 114 bitcoin, presently valued at more than $725,000. Around the time that Grubbs stopped responding to support requests on Hackforums, federal prosecutors were securing a guilty plea against Taylor Huddleston, a then 27-year-old programmer from Arkansas who sold the “NanoCore RAT.” Like Grubbs, Huddleston initially pleaded not guilty to computer intrusion charges, arguing that he wasn’t responsible for how customers used his products. That is, until prosecutors presented Skype logs showing that Huddleston routinely helped buyers work out how to use the tools to secretly compromise remote computers. Huddleston is currently serving a 33-month sentence after pleading guilty to selling the NanoCore RAT. 
from https://krebsonsecurity.com/2018/10/who-is-agent-tesla/

Legal industry more likely to break phones than those working in construction or hospitality

The boffins here at iMend.com have analysed the thousands of broken or damaged business-owned mobile phones we repair each year to uncover which industries damage the highest number of devices, and what the most common repairs are. We found that law firms break the highest number of devices, more than even construction workers. We also discovered that an office desk contains more dangers for a smartphone than a construction site! Top 5 industries which break the most work phones:
As a result, the two most common repairs we make on business devices are screen replacements on dropped devices and battery replacements for older phones. The team also deals with devices that have been run over by a vehicle, dropped in the toilet or had coffee or tea spilt over them, showing the range of potential hazards which need to be avoided. Due to the high cost of replacing damaged devices with new phones, businesses can save thousands of pounds per year by repairing broken work phones. The average work phone needs a new screen or rear housing after as little as 3 to 6 months of use. An iPhone 6S’s screen can be replaced 8 times and still cost less than buying a new handset outright. The Samsung S7’s screen can be replaced 3.5 times for the same cost as a new device. Sarah McConomy, Sales & Marketing Director at iMend.com, added, “The office environment is a dangerous place for a mobile phone, and especially with liquid damage, employee efforts to fix the phone can damage the phone more! Putting a waterlogged phone in rice can cause a rice grain to get lodged in the power socket or headphone jack, and it doesn’t help the phone dry out. If a phone has had a milky tea or coffee spilt over it, drying it out will leave milk residue inside which can still go off!”

Environmental Impact

Businesses are increasingly aware of the environmental impact they can have, and keeping mobile devices running for as long as possible allows businesses to reduce their electronics waste while also saving thousands of pounds a year. Apple provides software updates for devices up to 4 years old, and by replacing the battery when it starts to run down businesses can dramatically extend the useful life they can extract from their mobile assets, which is also better for the environment!
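The repair-versus-replace break-even claimed above can be sanity-checked with back-of-the-envelope arithmetic. The prices below are assumed placeholders for illustration, not iMend's actual figures:

```python
# Hypothetical prices in GBP -- placeholders, not iMend's real pricing.
screen_repair_cost = 45.0   # assumed cost of one screen repair
new_handset_cost = 400.0    # assumed cost of a brand-new replacement handset

# Number of whole screen repairs that still total less than a new phone.
repairs_before_breakeven = int(new_handset_cost // screen_repair_cost)
print(repairs_before_breakeven)  # 8 with these assumed prices
```

With those assumed numbers, eight repairs cost £360 against £400 for a new handset, which is the shape of the trade-off the article describes.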
Do you work in one of these professions? Does this ring true to you? Maybe you work in another industry you think is even worse for breaking phones?! We would love to hear your stories! The post Legal professionals are the most likely to break their work phones appeared first on iMend Blog.

from https://www.imend.com/blog/most-likely-to-break-their-work-phones/

Earlier this month I spoke at a cybersecurity conference in Albany, N.Y. alongside Tony Sager, senior vice president and chief evangelist at the Center for Internet Security and a former bug hunter at the U.S. National Security Agency. We talked at length about many issues, including supply chain security, and I asked Sager whether he’d heard anything about rumors that Supermicro — a high tech firm in San Jose, Calif. — had allegedly inserted hardware backdoors in technology sold to a number of American companies. The event Sager and I spoke at was prior to the publication of Bloomberg Businessweek‘s controversial story alleging that Supermicro had duped almost 30 companies into buying backdoored hardware. Sager said he hadn’t heard anything about Supermicro specifically, but we chatted at length about the challenges of policing the technology supply chain. Below are some excerpts from our conversation. I learned quite a bit, and I hope you will, too. Brian Krebs (BK): Do you think Uncle Sam spends enough time focusing on the supply chain security problem? It seems like a pretty big threat, but also one that is really hard to counter. Tony Sager (TS): The federal government has been worrying about this kind of problem for decades. In the 70s and 80s, the government was more dominant in the technology industry and didn’t have this massive internationalization of the technology supply chain. But even then there were people who saw where this was all going, and there were some pretty big government programs to look into it. BK: Right, the Trusted Foundry program I guess is a good example. TS: Exactly.
That was an attempt to help support a U.S.-based technology industry so that we had an indigenous place to work with, and where we have only cleared people and total control over the processes and parts. BK: Why do you think more companies aren’t insisting on producing stuff through code and hardware foundries here in the U.S.? TS: Like a lot of things in security, the economics always win. And eventually the cost differential for offshoring parts and labor overwhelmed attempts at managing that challenge. BK: But certainly there are some areas of computer hardware and network design where you absolutely must have far greater integrity assurance? TS: Right, and this is how they approach things at Sandia National Laboratories [one of three national nuclear security research and development laboratories]. One of the things they’ve looked at is this whole business of whether someone might sneak something into the design of a nuclear weapon. The basic design principle has been to assume that one person in the process may have been subverted somehow, and the whole design philosophy is built around making sure that no one person gets to sign off on what goes into a particular process, and that there is never unobserved control over any one aspect of the system. So, there are a lot of technical and procedural controls there. But the bottom line is that doing this is really much harder [for non-nuclear electronic components] because of all the offshoring now of electronic parts, as well as the software that runs on top of that hardware. BK: So is the government basically only interested in supply chain security so long as it affects stuff they want to buy and use? TS: The government still has regular meetings on supply chain risk management, but there are no easy answers to this problem. The technical ability to detect something wrong has been outpaced by the ability to do something about it. BK: Wait…what? 
TS: Suppose a nation state dominates a piece of technology and in theory could plant something inside of it. The attacker in this case has a risk model, too. Yes, he could put something in the circuitry or design, but his risk of exposure also goes up. Could I as an attacker control components that go into certain designs or products? Sure, but it’s often not very clear what the target is for that product, or how you will guarantee it gets used by your target. And there are still a limited set of bad guys who can pull that stuff off. In the past, it’s been much more lucrative for the attacker to attack the supply chain on the distribution side, to go after targeted machines in targeted markets to lessen the exposure of this activity. BK: So targeting your attack becomes problematic if you’re not really limiting the scope of targets that get hit with compromised hardware. TS: Yes, you can put something into everything, but all of a sudden you have this massive big data collection problem on the back end where you as the attacker have created a different kind of analysis problem. Of course, some nations have more capability than others to sift through huge amounts of data they’re collecting. BK: Can you talk about some of the things the government has typically done to figure out whether a given technology supplier might be trying to slip in a few compromised devices among an order of many? TS: There’s this concept of the “blind buy,” where if you think the threat vector is someone gets into my supply chain and subverts the security of individual machines or groups of machines, the government figures out a way to purchase specific systems so that no one can target them. In other words, the seller doesn’t know it’s the government who’s buying it. This is a pretty standard technique to get past this, but it’s an ongoing cat and mouse game to be sure. 
BK: I know you said before this interview that you weren’t prepared to comment on the specific claims in the recent Bloomberg article, but it does seem that supply chain attacks targeting cloud providers could be very attractive for an attacker. Can you talk about how the big cloud providers could mitigate the threat of incorporating factory-compromised hardware into their operations? TS: It’s certainly a natural place to attack, but it’s also a complicated place to attack — particularly the very nature of the cloud, which is many tenants on one machine. If you’re attacking a target with on-premise technology, that’s pretty simple. But the purpose of the cloud is to abstract machines and make more efficient use of the same resources, so that there could be many users on a given machine. So how do you target that in a supply chain attack? BK: Is there anything about the way these cloud-based companies operate….maybe just sheer scale…that makes them perhaps uniquely more resilient to supply chain attacks vis-a-vis companies in other industries? TS: That’s a great question. The positive counter-trend is that in order to get the kind of speed and scale that the Googles and Amazons and Microsofts of the world want and need, these companies are far less inclined now to just take off-the-shelf hardware and are instead more inclined to build their own. BK: Can you give some examples? TS: There’s a fair amount of discussion among these cloud providers about commonalities — what parts of design could they cooperate on so there’s a marketplace for all of them to draw upon. And so we’re starting to see a real shift from off-the-shelf components to things that the service provider is either designing or pretty closely involved in the design, and so they can also build in security controls for that hardware. Now, if you’re counting on people to exactly implement designs, you have a different problem.
But these are really complex technologies, so it’s non-trivial to insert backdoors. It gets harder and harder to hide those kinds of things.

BK: That’s interesting, given how much each of us has tied up in various cloud platforms. Are there other examples of how the cloud providers can make it harder for attackers who might seek to subvert their services through supply chain shenanigans?

TS: One factor is they’re rolling this technology out fairly regularly, and on top of that the shelf life of technology for these cloud providers is now a very small number of years. They all want faster, more efficient, powerful hardware, and a dynamic environment is much harder to attack. This actually turns out to be a very expensive problem for the attacker because it might have taken them a year to get that foothold, but in a lot of cases the short shelf life of this technology [with the cloud providers] is really raising the costs for the attackers. When I looked at what Amazon and Google and Microsoft are pushing for, it’s really a lot of horsepower going into the architecture and designs that support that service model, including the building in of more and more security right up front. Yes, they’re still making lots of use of non-U.S.-made parts, but they’re really aware of that when they do. That doesn’t mean these kinds of supply chain attacks are impossible to pull off, but by the same token they don’t get easier with time.

BK: It seems to me that the majority of the government’s efforts to help secure the tech supply chain come in the form of looking for counterfeit products that might somehow wind up in tanks and ships and planes and cause problems there — as opposed to using that microscope to look at commercial technology. Do you think that’s accurate?

TS: I think that’s a fair characterization. It’s a logistical issue. Counterfeits are a related problem. Transparency is one general design philosophy.
Another is accountability and traceability back to a source. There’s this buzzphrase that if you can’t build in security then build in accountability. Basically, the notion there was that you often can’t build in the best or perfect security, but if you can build in accountability and traceability, that’s a pretty powerful deterrent as well as a necessary aid.

BK: For example…?

TS: Well, there’s this emphasis on high-quality and unchangeable logging. If you can build in strong accountability, so that when something goes wrong I can trace it back to whoever caused it, you make the problem more technically difficult for the attacker. Once I know I can trace back the construction of a computer board to a certain place, you’ve built a different kind of security challenge for the attacker. So the notion there is that while you may not be able to prevent every attack, this causes the attacker different kinds of difficulties, which is good news for the defense.

BK: So is supply chain security more of a physical security or cybersecurity problem?

TS: We like to think of this as we’re fighting in cyber all the time, but often that’s not true. If you can force attackers to subvert your supply chain, then you first take away the mid-level criminal elements and you force the attackers to do things that are outside the cyber domain, such as setting up front companies, bribing humans, and so on. And in those domains — particularly the human dimension — we have other mechanisms that are detectors of activity there.

BK: What role does network monitoring play here? I’m hearing a lot right now from tech experts who say organizations should be able to detect supply chain compromises because at some point they should be able to see truckloads of data leaving their networks if they’re doing network monitoring right. What do you think about the role of effective network monitoring in fighting potential supply chain attacks?

TS: I’m not so optimistic about that. It’s too easy to hide.
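The “unchangeable logging” TS mentions is commonly implemented as a hash chain, where each log entry commits to the hash of the one before it, so any after-the-fact edit breaks every subsequent link. A minimal sketch in Python; the entries, field names and events are purely illustrative, not any particular product’s log format:

```python
import hashlib
import json

def append_entry(log, event):
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    log.append({"event": event, "prev": prev_hash,
                "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(log):
    """Recompute every hash; tampering breaks the chain from that point on."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"event": entry["event"], "prev": prev_hash},
                          sort_keys=True)
        if (entry["prev"] != prev_hash or
                hashlib.sha256(body.encode()).hexdigest() != entry["hash"]):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "board 1234 assembled at plant A")
append_entry(log, "board 1234 shipped to integrator B")
assert verify(log)

log[0]["event"] = "board 1234 assembled at plant C"  # rewrite history
assert not verify(log)                               # the chain no longer checks out
```

This is the same deterrent TS describes: the attacker can no longer quietly alter the record of where a board came from without the tampering being detectable.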
Monitoring is about finding anomalies, either in the volume or type of traffic you’d expect to see. It’s a hard problem category. For the U.S. government, with perimeter monitoring there’s always a trade-off between the ability to monitor traffic and the natural movement of the entire Internet towards encryption by default. So a lot of things we don’t get to touch because of tunneling and encryption, and the Department of Defense in particular has really struggled with this. Now obviously what you can do is man-in-the-middle traffic with proxies and inspect everything there, and the perimeter of the network is ideally where you’d like to do that, but the speed and volume of the traffic is often just too great.

BK: Isn’t the government already doing this with the “trusted internet connections” or Einstein program, where they consolidate all this traffic at the gateways and try to inspect what’s going in and out?

TS: Yes, so they’re creating a highest-volume, highest-speed problem. To monitor that and not interrupt traffic you have to have bleeding-edge technology, and then handle a ton of it that is already encrypted. If you’re going to try to proxy that, break it out, do the inspection and then re-encrypt the data, a lot of times that’s hard to keep up with technically and speed-wise.

BK: Does that mean it’s a waste of time to do this monitoring at the perimeter?

TS: No. The initial foothold by the attacker could have easily been via a legitimate tunnel and someone took over an account inside the enterprise. The real meaning of a particular stream of packets coming through the perimeter you may not know until that thing gets through and executes. So you can’t solve every problem at the perimeter. Some things only become obvious, and make sense to catch, when they open up at the desktop.
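The “anomalies in volume” that monitoring hunts for are typically flagged against a statistical baseline. A toy z-score sketch makes the idea concrete, along with TS’s caveat: a bulk exfiltration stands out, while an attacker who trickles data out slowly stays under the threshold. All numbers and the threshold are illustrative:

```python
from statistics import mean, stdev

def is_anomalous(history, observed, threshold=3.0):
    """Flag a sample that deviates from the recent baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Bytes-per-minute leaving the network over recent samples (made-up numbers).
baseline = [980, 1010, 995, 1020, 990, 1005, 1000, 1015]

assert not is_anomalous(baseline, 1030)   # normal fluctuation: invisible
assert is_anomalous(baseline, 50_000)     # a sudden bulk transfer stands out
```

Real systems model far more dimensions (destination, protocol, time of day), but the weakness TS points to is the same: anything that blends into the baseline is, as he says, easy to hide.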
BK: Do you see any parallels between the challenges of securing the supply chain and the challenges of getting companies to secure Internet of Things (IoT) devices so that they don’t continue to become a national security threat for just about any critical infrastructure, such as with DDoS attacks like we’ve seen over the past few years?

TS: Absolutely, and again the economics of security are so compelling. With IoT we have the cheapest possible parts and devices with a relatively short life span, and it’s interesting to hear people talking about regulation around IoT. But a lot of the discussion I’ve heard recently does not revolve around top-down solutions but more like how do we learn from places like the Food and Drug Administration about certification of medical devices. In other words, are there known checks that we would like to see these devices put through before they become, in some generic sense, safe?

BK: How much of addressing the IoT and supply chain problems is about being able to look at the code that powers the hardware and finding the vulnerabilities there? Where does accountability come in?

TS: I used to look at other peoples’ software for a living and find zero-day bugs. What I realized was that our ability to find things as human beings with limited technology was never going to solve the problem. The deterrent effect of people believing someone was inspecting their software usually got more positive results than the actual looking. If they were going to make a mistake, deliberately or otherwise, they would have to work hard at it, and if there was some method of transparency, us finding the one or two and making a big deal of it when we did was often enough of a deterrent.

BK: Sounds like an approach that would work well to help us feel better about the security and code inside of these election machines that have become the subject of so much intense scrutiny of late.
TS: We’re definitely going through this now in thinking about the election devices. We’re kind of going through this classic argument where hackers are carrying the noble flag of truth and vendors are hunkering down on liability. So some of the vendors seem willing to do something different, but at the same time they’re kind of trapped now by the good intentions of the open vulnerability community. The question is, how do we bring some level of transparency to the process, but probably short of vendors exposing their trade secrets and the code to the world? What is it that they can demonstrate in terms of the cost-effectiveness of development practices to scrub out some of the problems before they get out there? This is important, because elections need one outcome: public confidence in the outcome. And of course, one way to do that is through greater transparency.

BK: What, if anything, are the takeaways for the average user here? With the proliferation of IoT devices in consumer homes, is there any hope that we’ll see more tools that help people gain more control over how these systems are behaving on the local network?

TS: Most of [the supply chain problem] is outside the individual’s ability to do anything about, and beyond the ability of small businesses to grapple with. It’s in fact outside of the autonomy of the average company to figure it out. We do need more national focus on the problem. It’s now almost impossible for consumers to buy electronics that aren’t Internet-connected. The chipsets are so cheap and the ability for every device to have its own Wi-Fi chip built in means that [manufacturers] are adding them whether it makes sense to or not. I think we’ll see more security coming into the marketplace to manage devices. So for example you might define rules that say appliances can talk to the manufacturer only. We’re going to see more easy-to-use tools available to consumers to help manage all these devices.
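The “appliances can talk to the manufacturer only” rule TS envisions is essentially an egress allowlist enforced per device at the home gateway. A minimal sketch of the policy check; the device names and domains are hypothetical:

```python
def allowed(device, dest_host, policy):
    """Permit outbound traffic only if the destination matches one of the
    device's allowed domains (e.g., its manufacturer's update servers)."""
    return any(dest_host == d or dest_host.endswith("." + d)
               for d in policy.get(device, []))

# Hypothetical policy: the smart fridge may only reach its maker's servers.
policy = {"fridge": ["updates.example-appliance.com"]}

assert allowed("fridge", "updates.example-appliance.com", policy)
assert not allowed("fridge", "attacker.example.net", policy)  # blocked
```

A real gateway would enforce this at the firewall and map devices by MAC address rather than a friendly name, but the deny-by-default shape of the rule is the point: a compromised appliance has nowhere useful to send traffic.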
We’re starting to see the fight for dominance in this space already at the home gateway and network management level. As these devices get more numerous and complicated, there will be more consumer-oriented ways to manage them. Some of the broadband providers already offer services that will tell you what devices are operating in your home and let users control when those various devices are allowed to talk to the Internet.

Since Bloomberg’s story broke, the U.S. Department of Homeland Security and the National Cyber Security Centre, a unit of Britain’s eavesdropping agency, GCHQ, both came out with statements saying they had no reason to doubt vehement denials by Amazon and Apple that they were affected by any incidents involving Supermicro’s supply chain security. Apple also penned a strongly-worded letter to lawmakers denying claims in the story. Meanwhile, Bloomberg reporters published a follow-up story citing new, on-the-record evidence to back up claims made in their original story.

from https://krebsonsecurity.com/2018/10/supply-chain-security-101-an-experts-view/

Microsoft this week released software updates to fix roughly 50 security problems with various versions of its Windows operating system and related software, including one flaw that is already being exploited and another for which exploit code is publicly available. The zero-day bug — CVE-2018-8453 — affects Windows versions 7, 8.1, 10 and Server 2008, 2012, 2016 and 2019. According to security firm Ivanti, an attacker first needs to log into the operating system, but then can exploit this vulnerability to gain administrator privileges. Another vulnerability patched on Tuesday — CVE-2018-8423 — was publicly disclosed last month along with sample exploit code. This flaw involves a component shipped on all Windows machines and used by a number of programs, and could be exploited by getting a user to open a specially-crafted file — such as a booby-trapped Microsoft Office document.
KrebsOnSecurity has frequently suggested that Windows users wait a day or two after Microsoft releases monthly security updates before installing the fixes, with the rationale that occasionally buggy patches can cause serious headaches for users who install them before all the kinks are worked out. This month, Microsoft briefly paused updates for Windows 10 users after many users reported losing all of the files in their “My Documents” folder. The worst part? Rolling back to saved versions of Windows from before the update did not restore the files. Microsoft appears to have since fixed the issue, but these kinds of incidents illustrate the value of not only waiting a day or two to install updates but also manually backing up your data prior to installing patches (i.e., not simply counting on Microsoft’s System Restore feature to save the day should things go haywire). Mercifully, Adobe has spared us an update this month for its Flash Player software, although it has shipped a non-security update for Flash. For more on this month’s Patch Tuesday batch, check out posts from Ivanti and Qualys. As always, if you experience any issues installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips. My apologies for the tardiness of this post; I have been traveling in Australia this past week with only sporadic access to the Internet.

from https://krebsonsecurity.com/2018/10/patch-tuesday-october-2018-edition/

What do we do with a company that regularly pumps metric tons of virtual toxic sludge onto the Internet and yet refuses to clean up its act? If ever there were a technology giant that deserved to be named and shamed for polluting the Web, it is Xiongmai — a Chinese maker of electronic parts that power a huge percentage of cheap digital video recorders (DVRs) and Internet-connected security cameras.
In late 2016, the world witnessed the sheer disruptive power of Mirai, a powerful botnet strain fueled by Internet of Things (IoT) devices like DVRs and IP cameras that were put online with factory-default passwords and other poor security settings. Security experts soon discovered that a majority of Mirai-infected devices were chiefly composed of components made by Xiongmai (a.k.a. Hangzhou Xiongmai Technology Co., Ltd.) and a handful of other Chinese tech firms that seemed to have a history of placing product market share and price above security. Since then, two of those firms — Huawei and Dahua — have taken steps to increase the security of their IoT products out-of-the-box. But Xiongmai — despite repeated warnings from researchers about deep-seated vulnerabilities in its hardware — has continued to ignore such warnings and to ship massively insecure hardware and software for use in products that are white-labeled and sold by more than 100 third-party vendors. On Tuesday, Austrian security firm SEC Consult released the results of extensive research into multiple, lingering and serious security holes in Xiongmai’s hardware. SEC Consult said it began the process of working with Xiongmai on these problems back in March 2018, but that it finally published its research after it became clear that Xiongmai wasn’t going to address any of the problems. “Although Xiongmai had seven months notice, they have not fixed any of the issues,” the researchers wrote in a blog post published today. “The conversation with them over the past months has shown that security is just not a priority to them at all.” Xiongmai did not respond to requests for comment.

PROBLEM TO PROBLEM

A core part of the problem is the peer-to-peer (P2P) communications component called “XMEye” that ships with all Xiongmai devices and automatically connects them to a cloud network run by Xiongmai.
The P2P feature is designed so that consumers can access their DVRs or security cameras remotely from anywhere in the world and without having to configure anything. To access a Xiongmai device via the P2P network, one must know the Unique ID (UID) assigned to each device. The UID is a 16-character string of numbers and letters (such as 68ab8124db83c8db) that is essentially derived in an easily reproducible way from the device’s built-in MAC address. Electronics firms are assigned ranges of MAC addresses that they may use, but SEC Consult discovered that Xiongmai for some reason actually uses MAC address ranges assigned to a number of other companies, including tech giant Cisco Systems, German printing press maker Koenig & Bauer AG, and Swiss chemical analysis firm Metrohm AG. SEC Consult learned that it was trivial to find Xiongmai devices simply by computing all possible ranges of UIDs for each range of MAC addresses, and then scanning Xiongmai’s public cloud for XMEye-enabled devices. Based on scanning just two percent of the available ranges, SEC Consult conservatively estimates there are around 9 million Xiongmai P2P devices online. [For the record, KrebsOnSecurity has long advised buyers of IoT devices to avoid those that advertise P2P capabilities for just this reason. The Xiongmai debacle is yet another example of why this remains solid advice.]

BLANK TO BANK

While one still needs to provide a username and password to remotely access XMEye devices via this method, SEC Consult notes that the default password of the all-powerful administrative user (username “admin”) is blank (i.e., no password). The admin account can be used to do anything to the device, such as changing its settings or uploading software — including malware like Mirai. And because users are not required to set a secure password in the initial setup phase, it is likely that a large number of devices are accessible via these default credentials.
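The enumeration problem SEC Consult describes follows directly from deriving identifiers from MAC addresses: a vendor’s MAC prefix (OUI) leaves only 24 unknown bits, so every UID in a range can be precomputed. The sketch below illustrates only that scale argument; the real XMEye derivation is not spelled out above, so the `derive` transform and the OUI `68ab81` are placeholders, not Xiongmai’s actual scheme:

```python
def candidate_uids(oui, derive):
    """Yield a UID for every possible device MAC under one vendor prefix.

    `derive` stands in for whatever reproducible MAC-to-UID transform the
    vendor uses; the real one is not public, so this is purely illustrative.
    """
    for suffix in range(2 ** 24):            # 24 host bits under each OUI
        mac = oui + format(suffix, "06x")    # e.g. 68ab81 + 000000
        yield derive(mac)

# A placeholder transform, just to show the enumeration is mechanical.
toy_derive = lambda mac: mac.upper()

uids = candidate_uids("68ab81", toy_derive)
assert next(uids) == "68AB81000000"
```

The lesson generalizes: any identifier computed deterministically from a public or guessable value (like a MAC address) gives an attacker a complete, scannable keyspace of roughly 16.7 million candidates per prefix.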
Even if a customer has changed the default admin password, SEC Consult discovered there is an undocumented user with the name “default,” whose password is “tluafed” (“default” in reverse). While this user account can’t change system settings, it is still able to view any video streams. Normally, hardware devices are secured against unauthorized software updates by requiring that any new software pushed to the devices be digitally signed with a secret cryptographic key that is held only by the hardware or software maker. However, XMEye-enabled devices have no such protections. In fact, the researchers found it was trivial to set up a system that mimics the XMEye cloud and push malicious firmware updates to any device. Worse still, unlike with the Mirai malware — which gets wiped from memory when an infected device powers off or is rebooted — the update method devised by SEC Consult makes it so that any software uploaded survives a reboot.

CAN XIONGMAI REALLY BE THAT BAD?

In the wake of the Mirai botnet’s emergence in 2016 and the subsequent record denial-of-service attacks that brought down chunks of the Internet at a time (including this Web site and my DDoS protection provider at times), multiple security firms said Xiongmai’s insecure products were a huge contributor to the problem. Among the company’s strongest critics was New York City-based security firm Flashpoint, which pointed out that even basic security features built into Xiongmai’s hardware had completely failed. For example, Flashpoint’s analysts discovered that the login page for a camera or DVR running Xiongmai hardware and software could be bypassed just by navigating to a page called “DVR.htm” prior to login.
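The signed-update check that XMEye devices lack is simple in outline: the device verifies every incoming image against a vendor key before flashing it. A minimal sketch follows; a real design uses an asymmetric signature (the device holds only a public key), but an HMAC with a stand-in key keeps this self-contained, and the key and firmware strings are of course hypothetical:

```python
import hashlib
import hmac

VENDOR_KEY = b"vendor-signing-key"   # stand-in for the vendor's private key

def sign_firmware(image: bytes) -> bytes:
    """Vendor side: produce an authentication tag over the firmware image."""
    return hmac.new(VENDOR_KEY, image, hashlib.sha256).digest()

def apply_update(image: bytes, signature: bytes) -> bool:
    """Device side: refuse any image whose signature does not verify —
    exactly the check XMEye-enabled devices are missing."""
    if not hmac.compare_digest(sign_firmware(image), signature):
        return False                 # reject unsigned or tampered firmware
    # ... flash `image` to storage here ...
    return True

official = b"firmware v2.1"
sig = sign_firmware(official)
assert apply_update(official, sig)                 # genuine update accepted
assert not apply_update(b"mirai payload", sig)     # forged update rejected
```

With this check in place, a fake cloud server like the one SEC Consult built could still offer an update, but the device would refuse to install anything the vendor key had not signed.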
Flashpoint’s researchers also found that any changes to passwords for various user accounts accessible via the Web administration page for Xiongmai products did nothing to change passwords for accounts that were hard-coded into these devices and accessible only via more obscure, command-line communications interfaces like Telnet and SSH. Not long after Xiongmai was publicly shamed for failing to fix obvious security weaknesses that helped contribute to the spread of Mirai and related IoT botnets, Xiongmai lashed out at multiple security firms and journalists, promising to sue its critics for defamation (it never followed through on that threat, as far as I can tell). At the same time, Xiongmai promised that it would be issuing a product recall on millions of devices to ensure they were not deployed with insecure settings and software. But according to Flashpoint’s Zach Wikholm, Xiongmai never followed through with the recall, either. Rather, it was all a way for the company to save face publicly and with its business partners. “This company said they were going to do a product recall, but it looks like they never got around to it,” Wikholm said. “They were just trying to cover up and keep moving.” Wikholm said Flashpoint discovered a number of additional glaring vulnerabilities in Xiongmai’s hardware and software that left them wide open to takeover by malicious hackers, and that several of those weaknesses still exist in the company’s core product line. “We could have kept releasing our findings, but it just got really difficult to keep doing that because Xiongmai wouldn’t fix them and it would only make it easier for people to compromise these devices,” Wikholm said. The Flashpoint analyst said he believes SEC Consult’s estimates of the number of vulnerable Xiongmai devices to be extremely conservative. 
“Nine million devices sounds quite low because these guys hold 25 percent of the world’s DVR market,” to say nothing of the company’s share in the market for cheapo IP cameras, Wikholm said. What’s more, he said, Xiongmai has turned a deaf ear to reports about dangerous security holes across its product lines principally because it doesn’t answer directly to customers who purchase the gear. “The only reason they’ve maintained this level of [not caring] is they’ve been in this market for a long time and established very strong regional sales channels to dozens of third-party companies,” that ultimately rebrand Xiongmai’s products as their own, he said. Also, the typical consumer of cheap electronics powered by Xiongmai’s kit doesn’t really care how easily these devices can be commandeered by cybercriminals, Wikholm observed. “They just want a security system around their house or business that doesn’t cost an arm and a leg, and Xiongmai is by far the biggest player in that space,” he said. “Most companies at least have some sort of incentive to make things better when faced with public pressure. But they don’t seem to have that drive.”

A PHANTOM MENACE

SEC Consult concluded its technical advisory about the security flaws by saying Xiongmai “does not provide any mitigations and hence it is recommended not to use any products associated with the XMeye P2P Cloud until all of the identified security issues have been fixed and a thorough security analysis has been performed by professionals.” While this may sound easy enough, acting on that advice is difficult in practice because very few devices made with Xiongmai’s deeply flawed hardware and software advertise that fact on the label or product name. Rather, the components that Xiongmai makes are sold downstream to vendors who then use them in their own products and slap on a label with their own brand name. How many vendors?
It’s difficult to say for sure, but a search on the term XMEye via the e-commerce sites where Xiongmai’s white-labeled products typically are sold (Amazon, Aliexpress.com, Homedepot.com and Walmart) reveals more than 100 companies that you’ve probably never heard of which brand Xiongmai’s hardware and software as their own. That list is available here (PDF) and is also pasted at the conclusion of this post for the benefit of search engines. SEC Consult’s technical advisory about their findings lists a number of indicators that system and network administrators can use to quickly determine whether any of these vulnerable P2P Xiongmai devices happen to be on your network. For end users concerned about this, one way of fingerprinting Xiongmai devices is to search Amazon.com, aliexpress.com, walmart.com and other online merchants for the brand on the side of your device and the term “XMEye.” If you get a hit, chances are excellent you’ve got a device built on Xiongmai’s technology. Another option: open a browser and navigate to the local Internet address of your device. If you have one of these devices on your local network, the login page should look like the one below. Another giveaway on virtually all Xiongmai devices: pasting “http://IP/err.htm” into a browser address bar (where IP = the local IP address of the device) should display a telltale error message. According to SEC Consult, Xiongmai’s electronics and hardware make up the guts of IP cameras and DVRs marketed and sold under the company names below. What’s most remarkable about many of the companies listed below is that about half of them don’t even have their own Web sites, and instead simply rely on direct-to-consumer product listings at Amazon.com or other e-commerce outlets. Among those that do sell Xiongmai’s products directly via the Web, very few of them seem to even offer secure (https://) Web sites.
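The err.htm check lends itself to a small sweep of your own local network. A standard-library sketch is below; it only tests whether anything answers at /err.htm, which is a weak heuristic on its own, since the content of the error page (shown in the original post) is the stronger signal. Run it only against networks you own:

```python
import ipaddress
from urllib.request import urlopen
from urllib.error import URLError

def err_urls(network):
    """Build the /err.htm probe URL for every host on a local subnet."""
    return [f"http://{ip}/err.htm"
            for ip in ipaddress.ip_network(network).hosts()]

def probe(url, timeout=2):
    """Return True if a device serves anything at /err.htm (weak heuristic)."""
    try:
        return urlopen(url, timeout=timeout).status == 200
    except (URLError, OSError):
        return False

urls = err_urls("192.168.1.0/24")   # a typical home subnet; adjust to yours
# hits = [u for u in urls if probe(u)]   # uncomment to sweep your own network
```

Any hit is only a candidate; confirm it by checking the device’s login page and searching the vendor name alongside “XMEye” as described above.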
SEC Consult’s blog post about their findings has more technical details, as does the security advisory they released today. Here’s the current list of companies that white label Xiongmai’s insecure products, according to SEC Consult: 9Trading

from https://krebsonsecurity.com/2018/10/naming-shaming-web-polluters-xiongmai/

From time to time, there emerge cybersecurity stories of such potential impact that they have the effect of making all other security concerns seem minuscule and trifling by comparison. Yesterday was one of those times. Bloomberg Businessweek on Thursday published a bombshell investigation alleging that Chinese cyber spies had used a U.S.-based tech firm to secretly embed tiny computer chips into electronic devices purchased and used by almost 30 different companies. There aren’t any corroborating accounts of this scoop so far, but it is both fascinating and terrifying to look at why threats to the global technology supply chain can be so difficult to detect, verify and counter. In the context of computer and Internet security, supply chain security refers to the challenge of validating that a given piece of electronics — and by extension the software that powers those computing parts — does not include any extraneous or fraudulent components beyond what was specified by the company that paid for the production of said item. In a nutshell, the Bloomberg story claims that San Jose, Calif.-based tech giant Supermicro was somehow caught up in a plan to quietly insert a rice-sized computer chip on the circuit boards that get put into a variety of servers and electronic components purchased by major vendors, allegedly including Amazon and Apple. The chips were alleged to have spied on users of the devices and sent unspecified data back to the Chinese military. It’s critical to note up top that Amazon, Apple and Supermicro have categorically denied most of the claims in the Bloomberg piece.
That is, their positions refuting core components of the story would appear to leave little wiggle room for future backtracking on those statements. Amazon also penned a blog post that more emphatically stated their objections to the Bloomberg piece. Nevertheless, Bloomberg reporters write that “the companies’ denials are countered by six current and former senior national security officials, who—in conversations that began during the Obama administration and continued under the Trump administration—detailed the discovery of the chips and the government’s investigation.” The story continues:
Many readers have asked for my take on this piece. I heard similar allegations earlier this year about Supermicro and tried mightily to verify them but could not. That in itself should be zero gauge of the story’s potential merit. After all, I am just one guy, whereas this is the type of scoop that usually takes entire portions of a newsroom to research, report and vet. By Bloomberg’s own account, the story took more than a year to report and write, and cites 17 anonymous sources as confirming the activity. Most of what I have to share here is based on conversations with some clueful people over the years who would probably find themselves confined to a tiny, windowless room for an extended period if their names or quotes ever showed up in a story like this, so I will tread carefully around this subject. The U.S. Government isn’t eager to admit it, but there has long been an unofficial inventory of tech components and vendors that are forbidden to buy from if you’re in charge of procuring products or services on behalf of the U.S. Government. Call it the “brown list,” “black list,” “entity list” or what have you, but it’s basically an indelible index of companies that are on the permanent Shit List of Uncle Sam for having been caught pulling some kind of supply chain shenanigans. More than a decade ago when I was a reporter with The Washington Post, I heard from an extremely well-placed source that one Chinese tech company had made it onto Uncle Sam’s entity list because it sold a custom hardware component for many Internet-enabled printers that secretly made a copy of every document or image sent to the printer and forwarded that to a server allegedly controlled by hackers aligned with the Chinese government. That example gives a whole new meaning to the term “supply chain,” doesn’t it? If Bloomberg’s reporting is accurate, that’s more or less what we’re dealing with here in Supermicro as well.
But here’s the thing: Even if you identify which technology vendors are guilty of supply-chain hacks, it can be difficult to enforce their banishment from the procurement chain. One reason is that it is often tough to tell from the brand name of a given gizmo who actually makes all the multifarious components that go into any one electronic device sold today. Take, for instance, the problem right now with insecure Internet of Things (IoT) devices — cheapo security cameras, Internet routers and digital video recorders — sold at places like Amazon and Walmart. Many of these IoT devices have become a major security problem because they are massively insecure by default and difficult if not also impractical to secure after they are sold and put into use. For every company in China that produces these IoT devices, there are dozens of “white label” firms that market and/or sell the core electronic components as their own. So while security researchers might identify a set of security holes in IoT products made by one company whose products are white labeled by others, actually informing consumers about which third-party products include those vulnerabilities can be extremely challenging. In some cases, a technology vendor responsible for some part of this mess may simply go out of business or close its doors and re-emerge under different names and managers. Mind you, there is no indication anyone is purposefully engineering so many of these IoT products to be insecure; a more likely explanation is that building in more security tends to make devices considerably more expensive and slower to market. In many cases, their insecurity stems from a combination of factors: They ship with every imaginable feature turned on by default; they bundle outdated software and firmware components; and their default settings are difficult or impossible for users to change. 
We don’t often hear about intentional efforts to subvert the security of the technology supply chain simply because these incidents tend to get quickly classified by the military when they are discovered. But the U.S. Congress has held multiple hearings about supply chain security challenges, and the U.S. government has taken steps on several occasions to block Chinese tech companies from doing business with the federal government and/or U.S.-based firms. Most recently, the Pentagon banned the sale of Chinese-made ZTE and Huawei phones on military bases, according to a Defense Department directive that cites security risks posed by the devices. The U.S. Department of Commerce also has instituted a seven-year export restriction for ZTE, resulting in a ban on U.S. component makers selling to ZTE. Still, the issue here isn’t that we can’t trust technology products made in China. Indeed there are numerous examples of other countries — including the United States and its allies — slipping their own “backdoors” into hardware and software products. Like it or not, the vast majority of electronics are made in China, and this is unlikely to change anytime soon. The central issue is that we don’t have any other choice right now, because by nearly all accounts it would be punishingly expensive to replicate that manufacturing process here in the United States. Even if the U.S. government and Silicon Valley somehow mustered the funding and political will to do that, insisting that products sold to U.S. consumers or the U.S. government be made only with components made here in the U.S.A. would massively drive up the cost of all forms of technology. Consumers would almost certainly balk at buying these far more expensive devices. Years of experience have shown that consumers aren’t interested in paying a huge premium for security when a comparable product with the features they want is available much more cheaply.
Indeed, noted security expert Bruce Schneier calls supply-chain security “an insurmountably hard problem.” “Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product,” Schneier wrote in an opinion piece published earlier this year in The Washington Post. “No one wants to even think about a US-only anything; prices would multiply many times over. We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.” The Bloomberg piece also addresses this elephant in the room:
Another huge challenge of securing the technology supply chain is that it’s quite time-consuming and expensive to detect when products may have been intentionally compromised during some part of the manufacturing process. Your typical motherboard of the kind produced by a company like Supermicro can include hundreds of chips, but it only takes one hinky chip to subvert the security of the entire product. Also, most of the U.S. government’s efforts to police the global technology supply chain seem to be focused on preventing counterfeits — not finding secretly added spying components.

Finally, it’s not clear that private industry is up to the job, either. At least not yet. “In the three years since the briefing in McLean, no commercially viable way to detect attacks like the one on Supermicro’s motherboards has emerged—or has looked likely to emerge,” the Bloomberg story concludes. “Few companies have the resources of Apple and Amazon, and it took some luck even for them to spot the problem. ‘This stuff is at the cutting edge of the cutting edge, and there is no easy technological solution,’ one of the people present in McLean says. ‘You have to invest in things that the world wants. You cannot invest in things that the world is not ready to accept yet.'”

For my part, I try not to spin my wheels worrying about things I can’t change, and the supply chain challenges definitely fit into that category. I’ll have some more thoughts on the supply chain problem and what we can do about it in an interview to be published next week. But for the time being, there are some things worth thinking about that can help mitigate the threat from stealthy supply chain hacks. Writing for this week’s newsletter put out by the SANS Institute, a security training company based in Bethesda, Md., editorial board member William Hugh Murray has a few provocative thoughts:
from https://krebsonsecurity.com/2018/10/supply-chain-security-is-the-whole-enchilada-but-whos-willing-to-pay-for-it/

A ridiculous number of companies are exposing some or all of their proprietary and customer data by putting it in the cloud without any kind of authentication needed to read, alter or destroy it. When cybercriminals are the first to discover these missteps, usually the outcome is a demand for money in return for the stolen data. But when these screw-ups are unearthed by security professionals seeking to make a name for themselves, the resulting publicity often can leave the breached organization wishing they’d instead been quietly extorted by anonymous crooks.

Last week, I was on a train from New York to Washington, D.C. when I received a phone call from Vinny Troia, a security researcher who runs a startup in Missouri called NightLion Security. Troia had discovered that All American Entertainment, a speaker bureau which represents a number of celebrities who also can be hired to do public speaking, had exposed thousands of speaking contracts via an unsecured Amazon cloud instance. The contracts laid out how much each speaker makes per event, details about their travel arrangements, and any requirements or obligations stated in advance by both parties to the contract. No secret access or password was needed to view the documents.

It was a juicy find to be sure: I can now tell you how much Oprah makes per event (it’s a lot). Ditto for Gwyneth Paltrow, Olivia Newton John, Michael J. Fox and a host of others. But I’m not going to do that. Firstly, it’s nobody’s business what they make. More to the point, All American also is my speaker bureau, and included in the cache of documents the company exposed in the cloud were some of my speaking contracts. In fact, when Troia called about his find, I was on my way home from one such engagement.
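What “no password needed” typically means in these cases is an Amazon S3 bucket whose listing permission is open to anonymous requests. As a rough, stdlib-only sketch of how trivially that condition can be checked (the status-code meanings are standard S3 behavior; the helper names and any bucket name you would pass in are hypothetical, and this is not how Troia necessarily worked):

```python
from urllib.request import urlopen
from urllib.error import HTTPError

def classify_status(code: int) -> str:
    """Interpret the HTTP status S3 returns for an anonymous listing request."""
    if code == 200:
        return "public: anyone can enumerate the bucket's contents"
    if code == 403:
        return "exists, but anonymous listing is denied"
    if code == 404:
        return "no bucket by that name"
    return "unexpected response"

def probe_bucket(name: str) -> str:
    """Send a credential-free ListObjectsV2 request to the bucket's public URL."""
    url = f"https://{name}.s3.amazonaws.com/?list-type=2"
    try:
        with urlopen(url, timeout=10) as resp:
            return classify_status(resp.status)
    except HTTPError as err:  # S3 signals "private" (403) or "missing" (404) as HTTP errors
        return classify_status(err.code)
```

A 200 response comes back with an XML listing of every object key in the bucket, which is the condition at the heart of each leak described in this story: nothing more sophisticated than an unauthenticated HTTP GET stood between the open Internet and the data.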
I quickly informed my contact at All American and asked them to let me know the moment they confirmed the data was removed from the Internet. While awaiting that confirmation, my pent-up frustration seeped into a tweet that seemed to touch a raw nerve among others in the security industry. The same day I alerted them, All American took down its bucket of unsecured speaker contract data, and apologized profusely for the oversight (although I have yet to hear a good explanation as to why this data needed to be stored in the cloud to begin with). This was hardly the first time Troia had alerted me about a huge cache of important or sensitive data that companies had left exposed online.

On Monday, TechCrunch broke the story about a “breach” at Apollo, a sales engagement startup boasting a database of more than 200 million contact records. Calling it a breach seems a bit of a stretch; it probably would be more accurate to describe the incident as a data leak. Just like my speaker bureau, Apollo had simply put all this data up on an Amazon server that anyone on the Internet could access without providing a password. And Troia was again the one who figured out that the data had been leaked by Apollo — the result of an intensive, months-long process that took some extremely interesting twists and turns.

That journey — which I will endeavor to describe here — offered some uncomfortable insights into how organizations frequently learn about data leaks these days, and indeed whether they derive any lasting security lessons from the experience at all. It also gave me a new appreciation for how difficult it can be for organizations that screw up this way to tell the difference between a security researcher and a bad guy.

THE DARK OVERLORD

I began hearing from Troia almost daily beginning in mid-2017.
At the time, he was on something of a personal mission to discover the real-life identity behind The Dark Overlord (TDO), the pseudonym used by an individual or group of criminals who have been extorting dozens of companies — particularly healthcare providers — after hacking into their systems and stealing sensitive data.

The Dark Overlord’s method was roughly the same in each attack: Gain access to sensitive data (often by purchasing access through crimeware-as-a-service offerings), and send a long, rambling ransom note to the victim organization demanding tens of thousands of dollars in Bitcoin for the safe return of said data. Victims were typically told that if they refused to pay, the stolen data would be sold to cybercriminals lurking on Dark Web forums. Worse yet, TDO also promised to make sure the news media knew that victim organizations were more interested in keeping the breach private than in securing the privacy of their customers or patients.

In fact, the apparent ringleader of TDO reached out to KrebsOnSecurity in May 2016 with a remarkable offer. Using the nickname “Arnie,” the public voice of TDO said he was offering exclusive access to news about their latest extortion targets. Arnie claimed he was an administrator or key member on several top Dark Web forums, and provided a handful of convincing clues to back up his claim. He told me he had real-time access to dozens of healthcare organizations they’d hacked into, and that each one that refused to give in to TDO’s extortion demands could turn into a juicy scoop for KrebsOnSecurity.

Arnie said he was coming to me first with the offer, but that he was planning to approach other journalists and news outlets if I declined. I balked after discovering that Arnie wasn’t offering this access for free: He wanted 10 bitcoin in exchange for exclusivity (at the time, his asking price was roughly equivalent to USD $5,000).
Perhaps other news outlets are accustomed to paying for scoops, but that is not something I would ever consider. And in any case the whole thing was starting to smell like a shakedown or scam. I declined the offer. It’s possible other news outlets or journalists did not; I will not speculate on this matter further, other than to say readers can draw their own conclusions based on the timeline and the public record.

WHO IS SOUNDCARD?

Fast-forward to September 2017, and Troia was contacting me almost daily to share tidbits of research into email addresses, phone numbers and other bits of data apparently tied to TDO’s communications with victims and their various identities on Dark Web forums. His research was exhaustive and occasionally impressive, and for a while I caught the TDO bug and became engaged in a concurrent effort to learn the identities of the TDO members. For better or worse, the results of that research will have to wait for another story and another time.

At one point, Troia told me he’d gained acceptance on the Dark Web forum Kickass, using the hacker nickname “Soundcard”. He said he believed a presence on all of the forums TDO was active on was necessary for figuring out once and for all who was behind this brazen and very busy extortion group. Here is a screen shot Troia shared with me of Soundcard’s posting there, which concerned a July 2018 forum discussion thread about a data leak of 340 million records from Florida-based marketing firm Exactis. As detailed by Wired.com in June 2018, Troia had discovered this huge cache of data unprotected and sitting wide open on a cloud server, and ultimately traced it back to Exactis.

After several weeks of comparing notes about TDO with Troia, I learned that he was telling random people that we were “working together,” and that he was throwing my name around to various security industry sources and friends as a way of gaining access to new sources of data.
I respectfully told Troia that this was not okay — that I never told people about our private conversations (or indeed that we spoke at all) — and I asked him to stop doing that. He apologized, said he didn’t understand he’d overstepped certain boundaries, and that it would never happen again. But it would. Multiple times.

Here’s one time that really stood out for me. Earlier this summer, Troia sent me a link to a database of truly staggering size — nearly 10 terabytes of data — that someone had left open to anyone via a cloud instance. Again, no authentication or password was needed to access the information. At first glance, it appeared to be LinkedIn profile data. Working off that assumption, I began a hard target search of the database for specific LinkedIn profiles of important people. I first used the Web to locate the public LinkedIn profile pages for nearly all of the CEOs of the world’s top 20 largest companies, and then searched those profile names in the database that Troia had discovered. Suddenly, I had the cell phone numbers, addresses, email addresses and other contact data for some of the most powerful people in the world.

Immediately, I reached out to contacts at LinkedIn and Microsoft (which bought LinkedIn in 2016) and arranged a call to discuss the findings. LinkedIn’s security team told me the data I was looking at was in fact an amalgamation of information scraped from LinkedIn and dozens of public sources, and being sold by the same firm that was doing the scraping and profile collating. LinkedIn declined to name that company, and it has not yet responded to follow-up questions about whether the company it was referring to was Apollo. Sure enough, a closer inspection of the database revealed the presence of other public data sources, including startup web site AngelList, Facebook, Salesforce, Twitter, and Yelp, among others.
Several other trusted sources I approached with samples of data spliced from the nearly 10 TB trove of data Troia found in the cloud said they believed LinkedIn’s explanation, and that the data appeared to have been scraped off the public Internet from a variety of sources and combined into a single database. I told Troia it didn’t look like the data came exclusively from LinkedIn, or at least wasn’t stolen from them, and that all indications suggested it was a collection of data scraped from public profiles. He seemed unconvinced. Several days after my second call with LinkedIn’s security team — around Aug. 15 — I was made aware of a sales posting on the Kickass crime forum by someone selling what they claimed was “all of the LinkedIN user-base.” The ad, a blurry, partial screenshot of which can be seen below, was posted by the Kickass user Soundcard. The text of the sales thread was as follows:
The “sample data” included in the sales thread was from my records in this huge database, although Soundcard said he had sanitized certain data elements from this snippet. He explained his reasoning for that in a short Q&A from his sales thread:

Question 1: Why you sanitize Brian Krebs’ information in sample?
Answer 1: Because nothing in life free. This only to show i have data.

I soon confronted Troia not only for offering to sell leaked data on the Dark Web, but also for once again throwing my name around in his various activities — despite past assurances that he would not. Also, his actions had boxed me into a corner: Any plans I had to credit him in a story for eventually helping to determine the source of the leaked data (which we now know to be Apollo) became more complicated without also explaining his Dark Web alter ego as Soundcard, and I am not in the habit of omitting such important details from stories.

Troia assured me that he never had any intention of selling the data, and that the whole thing had been a ruse to help smoke out some of the suspected TDO members. For its part, LinkedIn’s security team was not amused, and published a short post to its media page denying that the company had suffered a security breach. “We want our members to know that a recent claim of a LinkedIn data breach is not accurate,” the company wrote. “Our investigation into this claim found that a third-party sales intelligence company that is not associated with LinkedIn was compromised and exposed a large set of data aggregated from a number of social networks, websites, and the company’s own customers. It also included a limited set of publicly available data about LinkedIn members, such as profile URL, industry and number of connections. This was not a breach of LinkedIn.”

It is quite a fine line to walk when self-styled security researchers mimic cyber criminals in the name of making things more secure.
On the one hand, reaching out to companies that are inadvertently exposing sensitive data and getting them to secure it or pull it offline altogether is a worthwhile and often thankless effort, and clearly many organizations still need a lot of help in this regard. On the other hand, most organizations that fit this description simply lack the security maturity to tell the difference between someone trying to make the Internet a safer place and someone trying to sell them a product or service. As a result, victim organizations tend to react with deep suspicion or even hostility to legitimate researchers and security journalists who alert them about a data breach or leak. And stunts like the ones described above tend to have the effect of deepening that suspicion, and sowing fear, uncertainty and doubt about the security industry as a whole.

from https://krebsonsecurity.com/2018/10/when-security-researchers-pose-as-cybercrooks-who-can-tell-the-difference/

Most of us have been trained to be wary of clicking on links and attachments that arrive in unexpected emails, but it’s easy to forget scam artists are constantly dreaming up innovations that put a new shine on old-fashioned telephone-based phishing scams. Think you’re too smart to fall for one? Think again: Even technology experts are getting taken in by some of the more recent schemes (or very nearly).

Matt Haughey is the creator of the community Weblog MetaFilter and a writer at Slack. Haughey banks at a small Portland credit union, and last week he got a call on his mobile phone from an 800-number that matched the number his credit union uses. Actually, he got three calls from the same number in rapid succession. He ignored the first two, letting them both go to voicemail. But he picked up on the third call, thinking it must be something urgent and important. After all, his credit union had rarely ever called him.
Haughey said he was greeted by a female voice who explained that the credit union had blocked two phony-looking charges in Ohio made to his debit/ATM card. She then read him the last four digits of the card that was currently in his wallet. It checked out.

Haughey told the lady that he would need a replacement card immediately because he was about to travel out of state to California. Without missing a beat, the caller said he could keep his card and that the credit union would simply block any future charges that weren’t made in either Oregon or California. This struck Haughey as a bit off. Why would the bank say they were freezing his card but then say they could keep it open for his upcoming trip? It was the first time the voice inside his head spoke up and said, “Something isn’t right, Matt.” But, he figured, the customer service person at the credit union was just trying to be helpful: She was doing him a favor.

The caller then read his entire home address to double check it was the correct destination to send a new card at the conclusion of his trip. Then the caller said she needed to verify his mother’s maiden name. The voice in his head spoke out in protest again, but then banks had asked for this in the past. He provided it. Next she asked him to verify the three-digit security code printed on the back of his card. Once more, the voice of caution in his brain was silenced: He’d given this code out previously in the few times he’d used his card to pay for something over the phone.

Then she asked him for his current card PIN, just so she could apply that same PIN to the new card being mailed out, she assured him. Ding, ding, ding went the alarm bells in his head. Haughey hesitated, then asked the lady to repeat the question. When she did, he gave her the PIN, and she assured him she’d make sure his existing PIN also served as the PIN for his new card.
Haughey said after hanging up he felt fairly certain the entire transaction was legitimate, although the part about her requesting the PIN kept nagging at him. “I balked at challenging her because everything lined up,” he said in an interview with KrebsOnSecurity. “But when I hung up the phone and told a friend about it, he was like, ‘Oh man, you just got scammed, there’s no way that’s real.'”

Now more concerned, Haughey visited his credit union to make sure his travel arrangements were set. When he began telling the bank employee what had transpired, he could tell by the look on her face that his friend was right. A review of his account showed that there were indeed two fraudulent charges on his account from earlier that day totaling $3,400, but neither charge was from Ohio. Rather, someone used a counterfeit copy of his debit card to spend more than $2,900 at a Kroger near Atlanta, and to withdraw almost $500 from an ATM in the same area. After the unauthorized charges, he had just $300 remaining in his account.

“People I’ve talked to about this say there’s no way they’d fall for that, but when someone from a trustworthy number calls, says they’re from your small town bank, and sounds incredibly professional, you’d fall for it, too,” Haughey said.

Fraudsters can use a variety of open-source and free tools to fake or “spoof” the number displayed as the caller ID, lending legitimacy to phone phishing schemes. Often, just sprinkling in a little foreknowledge of the target’s personal details — SSNs, dates of birth, addresses and other information that can be purchased for a nominal fee from any one of several underground sites that sell such data — adds enough detail to the call to make it seem legitimate.

A CLOSE CALL

Cabel Sasser is founder of a Mac and iOS software company called Panic Inc. Sasser said he almost got scammed recently after receiving a call that appeared to be the same number as the one displayed on the back of his Wells Fargo ATM card.
“I answered, and a Fraud Department agent said my ATM card has just been used at a Target in Minnesota, was I on vacation?” Sasser recalled in a tweet about the experience.

What Sasser didn’t mention in his tweet was that his corporate debit card had just been hit with two instances of fraud: Someone had charged $10,000 worth of metal air ducts to his card. When he disputed the charge, his bank sent a replacement card.

“I used the new card at maybe four places and immediately another fraud charge popped up for like $20,000 in custom bathtubs,” Sasser recalled in an interview with KrebsOnSecurity. “The morning this scam call came in I was spending time trying to figure out who might have lost our card data and was already in that frame of mind when I got the call about fraud on my card.”

And so the card-replacement dance began. “Is the card in your possession?” the caller asked. It was. The agent then asked him to read the three-digit CVV code printed on the back of his card. After verifying the CVV, the agent offered to expedite a replacement, Sasser said. “First he had to read some disclosures. Then he asked me to key in a new PIN. I picked a random PIN and entered it. Verified it again. Then he asked me to key in my current PIN.”

That made Sasser pause. Wouldn’t an actual representative from Wells Fargo’s fraud division already have access to his current PIN? “It’s just to confirm the change,” the caller told him. “I can’t see what you enter.”

“But…you’re the bank,” he countered. “You have my PIN, and you can see what I enter…”

The caller had a snappy reply for this retort as well. “Only the IVR [interactive voice response] system can see it,” the caller assured him. “Hey, if it helps, I have all of your account info up…to confirm, the last four digits of your Social Security number are XXXX, right?”

Sure enough, that was correct. But something still seemed off.
At this point, Sasser said he told the agent he would call back by dialing the number printed on his ATM card — the same number his mobile phone was already displaying as the source of the call. After doing just that, the representative who answered said there had been no such fraud detected on his account. “I was just four key presses away from having all my cash drained by someone at an ATM,” Sasser recalled. A visit to the local Wells Fargo branch before his trip confirmed that he’d dodged a bullet. “The Wells person was super surprised that I bailed out when I did, and said most people are 100 percent taken by this scam,” Sasser said.

HUMAN, ROBOT OR HYBRID?

In Sasser’s case, the scammer was a live person, but some equally convincing voice phishing schemes use a combination of humans and automation. Consider the following vishing attempt, reported to KrebsOnSecurity in August by “Curt,” a longtime reader from Canada.

“I’m both a TD customer and Rogers phone subscriber and just experienced what I consider a very convincing and/or elaborate social engineering/vishing attempt,” Curt wrote. “At 7:46pm I received a call from (647-475-1636) purporting to be from Credit Alert (alertservice.ca) on behalf of TD Canada Trust offering me a free 30-day trial for a credit monitoring service.”

The caller said her name was Jen Hansen, and began the call with what Curt described as “over-the-top courtesy.” “It sounded like a very well-scripted Customer Service call, where they seem to be trying so hard to please that it seems disingenuous,” Curt recalled. “But honestly it still sounded very much like a real person, not like a text to speech voice which sounds robotic. This sounded VERY natural.”

Ms. Hansen proceeded to tell Curt that TD Bank was offering a credit monitoring service free for one month, and that he could cancel at any time. To enroll, he only needed to confirm his home mailing address.
“I’m mega paranoid (I read krebsonsecurity.com daily) and asked her to tell me what address I had on their file, knowing full well my home address can be found in a variety of ways,” Curt wrote in an email to this author. “She said, ‘One moment while I access that information.'”

After a short pause, a new voice came on the line. “And here’s where I realized I was finally talking to a real human — a female with a slight French accent — who read me my correct address,” Curt recalled.

After another pause, Ms. Hansen’s voice came back on the line. While she was explaining that part of the package included free antivirus and anti-keylogging software, Curt asked her if he could opt in to receive his credit reports while opting out of installing the software. “I’m sorry, can you repeat that?” the voice identifying itself as Ms. Hansen replied. Curt repeated himself. After another, “I’m sorry, can you repeat that,” Curt asked Ms. Hansen where she was from. The voice confirmed what was indicated by the number displayed on his caller ID: that she was calling from Barrie, Ontario.

Trying to throw the robot voice further off-script, Curt asked what the weather was like in Barrie, Ontario. Another long pause. The voice continued describing the offered service. “I asked again about the weather, and she said, ‘I’m sorry, I don’t have that information. Would you like me to transfer you to someone that does?’ I said yes and again the real person with a French accent started speaking, ignoring my question about the weather and saying that if I’d like to continue with the offer I needed to provide my date of birth. This is when I hung up and immediately called TD Bank.” No one from TD had called him, they assured him.

FULLY AUTOMATED PHONE PHISHING

And then there are the fully automated voice phishing scams, which can be equally convincing.
Last week I heard from “Jon,” a cybersecurity professional with more than 30 years of experience under his belt (Jon asked to leave his last name out of this story). Answering a call on his mobile device from a phone number in Missouri, Jon was greeted with the familiar four-note AT&T jingle, followed by a recorded voice saying AT&T was calling to prevent his phone service from being suspended for non-payment.

“It then prompted me to enter my security PIN to be connected to a billing department representative,” Jon said. “My number was originally an AT&T number (it reports as Cingular Wireless) but I have been on T-Mobile for several years, so clearly a scam if I had any doubt. However, I suspect that the average Joe would fall for it.”

WHAT CAN YOU DO?

Just as you would never give out personal information if asked to do so via email, never give out any information about yourself in response to an unsolicited phone call. Phone phishing, like email scams, usually invokes an element of urgency in a bid to get people to let their guard down. If a call has you worried that there might be something wrong and you wish to call the company back, don’t call the number offered to you by the caller. If you want to reach your bank, call the number on the back of your card. If it’s another company you do business with, go to the company’s site and look up their main customer support number. Unfortunately, this may take a little work.

It’s not just banks and phone companies that are being impersonated by fraudsters. Reports on social media suggest many consumers also are receiving voice phishing scams that spoof customer support numbers at Apple, Amazon and other big-name tech companies. In many cases, the scammers are polluting top search engine results with phony 800-numbers for customer support lines that lead directly to fraudsters.

These days, scam calls happen on my mobile so often that I almost never answer my phone unless it appears to come from someone in my contacts list.
The Federal Trade Commission’s do-not-call list does not appear to have done anything to block scam callers, and the major wireless carriers seem to be pretty useless in blocking incessant robocalls, even when the scammers are impersonating the carriers themselves, as in Jon’s case above. I suspect people my age (mid-40s) and younger also generally let most unrecognized calls go to voicemail. It seems to be a very different reality for folks from an older generation, many of whom still primarily call friends and family using land lines, and who will always answer a ringing phone whenever it is humanly possible to do so. It’s a good idea to advise your loved ones to ignore calls unless they appear to come from a friend or family member, and to just hang up the moment the caller starts asking for personal information.

from https://krebsonsecurity.com/2018/10/voice-phishing-scams-are-getting-more-clever/