Cybercriminals are increasingly using an advanced method of hiding and sustaining their malicious Websites and botnet infrastructures -- dubbed "fast-flux" -- that could make them more difficult to detect, researchers say.
Criminal organizations behind two infamous malware families -- Warezov/Stration and Storm -- in the past few months have separately moved their infrastructures to so-called fast-flux service networks, according to the Honeynet Project & Research Alliance, which has released a new report on the emerging networks and techniques.
Fast-flux is basically load-balancing with a twist. It's a round-robin method in which infected bot machines (typically home computers) serve as proxies or hosts for malicious Websites. The proxies are constantly rotated by updating the domain's DNS records, preventing their discovery by researchers, ISPs, or law enforcement.
"The purpose of this technique is to render the IP-based block list -- a popular tool for identifying malicious systems -- useless for preventing attacks," says Adam O'Donnell, director of emerging technologies at security vendor Cloudmark.
Researchers and ISPs have been aware of fast-flux for over a year, but there hasn't been an in-depth look at how it works until now. "All of this research on fast-flux is new. No one had any definitive research on it," says Ralph Logan, vice president of the Honeynet Project and principal of The Logan Group. "We saw a rising trend in illegal, malicious criminal activity here."
Fast-flux helps cybercriminals hide their content servers, which host everything from fake online pharmacies and phishing sites to money-mule recruitment and adult content, Logan says. "This is to keep security professionals and ISPs from discovering and mitigating their illegal content."
The bad guys like fast-flux -- not only because it keeps them up and running, but also because it's more efficient than traditional methods of infecting multiple machines, which were easily discovered.
"The ISP would shut down my 100 machines, and then I'd have to infect 100 more to serve my content and relay my spam," Logan says. Fast-flux, however, lets hackers set up proxy servers that contact the "mother ship," which serves as command and control. This adds an extra layer of obfuscation between the victim (client) and the content machine, he says.
A fast-flux domain maps to hundreds or thousands of IP addresses, and the proxy machines behind them are rotated frequently -- some as often as every three minutes -- to avoid detection. "It's not a bunch of traffic to one node serving illegal code," Logan says.
"I send you a phishing email, you click on www.homepharmacy.com -- but it's really taking you to Grandma's PC on PacBell, which wakes up and says 'it's my turn now.' You'd have 100 different users coming to Grandma's PC for the next few minutes, and then Auntie Flo's PC gets command-and-controlled" next, Logan explains.
The home PC proxies are infected the usual way, through spam email, viruses, or other common methods, Logan says.
The Honeynet Project & Alliance set out a live honeypot to invite infection by a fast-flux service network. "Our honeypot can capture actual traffic between the mother ship and the end node," Logan says. The alliance is still studying the malicious code and behavior of the fast-flux network it has baited, he says.
What can be done about fast-flux? The report recommends that ISPs and users probe suspicious nodes and deploy intrusion detection systems; block TCP port 80 and UDP port 53; cut off access to mother-ship and other controller machines when they are detected; "blackhole" malicious domains via DNS and BGP route injection; and monitor DNS.
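The last of those recommendations, monitoring DNS, lends itself to a simple heuristic: fast-flux domains tend to answer with very short TTLs and an ever-growing pool of A records. Here is a minimal Python sketch of that idea; the thresholds and the lookup format are illustrative assumptions, not values from the report:

```python
# Toy fast-flux detector over repeated DNS lookups of one domain.
# Each lookup is (ttl_seconds, [ip, ...]); the thresholds are guesses.

def looks_fast_flux(lookups, max_ttl=300, min_unique_ips=10):
    """Flag a domain whose answers always carry short TTLs and whose
    A records accumulate into a large pool across lookups."""
    unique_ips = set()
    for ttl, ips in lookups:
        if ttl > max_ttl:          # a long TTL suggests a stable host
            return False
        unique_ips.update(ips)
    return len(unique_ips) >= min_unique_ips

# Rotating pool with 3-minute TTLs -- the pattern described above:
flux_lookups = [
    (180, ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"]),
    (180, ["10.0.1.1", "10.0.1.2", "10.0.1.3", "10.0.1.4"]),
    (180, ["10.0.2.1", "10.0.2.2", "10.0.2.3", "10.0.2.4"]),
]
# A normal site: one stable address with a day-long TTL:
stable_lookups = [(86400, ["192.0.2.10"]), (86400, ["192.0.2.10"])]
```

In practice the lookups would come from passive DNS monitoring at a resolver, and a real score would also weigh how many distinct networks the addresses sit in.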
Cloudmark's O'Donnell says fast flux is just the latest method of survival for the bad guys: There are more to come. "Any technique that allows a malicious actor to keep his network online longer -- and reduce the probability of his messages and attacks being blocked -- will be used," he says. "This is just the latest of those techniques."
Wednesday, July 18, 2007
Tuesday, July 17, 2007
Be careful what you get from Google!
Searching Google for free templates ("kostenlose Vorlagen" is German for "free templates") may bring you nasty things you don't want:
http://www.google.com/search?hl=en&q=kostenlose+vorlagen&btnG=Google+Search
Have a look at the first advertising link, "Kostenlos-Vorlagen.info". All the files offered there (identical copies of one another) are detected as:
AntiVir 7.4.0.39 07.07.2007 TR/Spy.BZub.JD.1
F-Secure 6.70.13260.0 07.07.2007 W32/Malware
Ikarus T3.1.1.8 07.07.2007 Trojan-Spy.Win32.Goldun.lw
Kaspersky 4.0.2.24 07.07.2007 Trojan-Spy.Win32.BZub.jd
Microsoft 1.2704 07.07.2007 TrojanDropper:Win32/Small.OT
Norman 5.80.02 07.06.2007 W32/Malware
Sophos 4.19.0 07.06.2007 Mal/Binder-C
Webwasher-Gateway 6.0.1 07.07.2007 Trojan.Spy.BZub.JD.1
When executed, the malware drops a file named:
C:\WINDOWS\System32\ipv6monl.dll
It hooks into Internet Explorer as a Browser Helper Object (BHO) under the CLSID:
HKEY_CLASSES_ROOT\CLSID\{36DBC179-A19F-48F2-B16A-6A3E19B42A87}
\InprocServer32
To make sure the BHO gets loaded, it checks that browser extensions are activated:
HKEY_CURRENT_USER\Software\Microsoft\Internet Explorer\Main
"Enable Browser Extensions" = yes
It also ensures that Internet Explorer can bypass the Windows Firewall by whitelisting it as an authorized application:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\SharedAccess
\Parameters\FirewallPolicy\StandardProfile\AuthorizedApplications
\List "C:\Program Files\Internet Explorer\IEXPLORE.EXE" = C:\Program
Files\Internet Explorer\IEXPLORE.EXE:*:Enabled:Internet Explorer
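The firewall whitelist entry above uses the AuthorizedApplications value format "path:scope:mode:display name". A small, hypothetical Python helper splits such an entry into its fields for reporting; note the rsplit, needed because the path itself contains a colon after the drive letter:

```python
# Split a Windows Firewall AuthorizedApplications value into fields.
# Format: "path:scope:mode:display name" (scope "*" = any remote host).

def parse_authorized_app(entry):
    # rsplit from the right: the drive letter ("C:") holds a colon too.
    path, scope, mode, name = entry.rsplit(":", 3)
    return {"path": path, "scope": scope, "mode": mode, "name": name}

entry = (r"C:\Program Files\Internet Explorer\IEXPLORE.EXE"
         ":*:Enabled:Internet Explorer")
fields = parse_authorized_app(entry)
# fields["mode"] is "Enabled", i.e. IE's traffic is allowed through.
```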
The keylogger function watches for banking logins; when one is recognized, it logs the credentials and sends them to a remote server.
False Positives...
It appears that Symantec's anti-virus definitions (July 15th, rev 2) had a false positive on FileZilla and NASA World Wind, detecting them as Adware.cpush. The definitions were fixed in the July 16th release. This isn't the first or last time false positives have shown up in anti-virus updates. As more malware gets developed, and its deployment gets quicker, the pressure on AV vendors to push definitions out fast is intense. That makes it difficult to test against all software, especially the more esoteric variety. Test longer and allow more time for exploitation, or ship the definition fast and risk false positives or negatives? Not an easy question to answer (unless you tier definitions and customize updates so people can choose "stable" rules, "bleeding edge" rules, etc.).
However, this leads to an interesting discussion. Could hackers craft their malware so that its signatures tend to match safe files? In a sense this is already done: malware tries to appear as legitimate as possible on the network and to evade heuristic detection. For typical signature detection, though, it isn't easy; it takes more than mindless polymorphism. The incentive for malware writers is for their code to stay undetected as long as possible. That means more targeting to avoid the honeynets, more subtlety to avoid network detection, and more subtle executables to evade AV software. Manipulating malware to maximize false positives could be an entertaining (and certainly painful) way to wreak havoc. Some basic research on this idea exists already, though nothing ready for market.
IT Security: The Data Theft Time Bomb
Despite the billions of dollars spent on information security products, the aggressive patching and repairing of operating systems and applications, and the heightened awareness of the need for computer users to guard against identity theft, most organizations aren't feeling any more secure than they were a year ago. InformationWeek Research's 10th annual Global Information Security survey, conducted with consulting firm Accenture, shows that two-thirds of 1,101 survey respondents in the United States and 89% of 1,991 respondents in China are feeling just as vulnerable to security attacks as last year, or more so.
Contributing to this unease is the perception that security technology has grown overly complex, to the point where it's contributing to the problem. The No. 1 security challenge identified by almost half of U.S. respondents is "managing the complexity of security." So-called "defense-in-depth" is just another way of saying "you've got a bunch of technologies that overlap and that don't handle security in a straightforward manner," says Alastair MacWillson, global managing director of Accenture's security practice. "It's like putting 20 locks on your door because you're not comfortable that any of them works."
Yet a case can be made that respondents aren't worried enough, particularly about lost and stolen company and customer data. Only one-third of U.S. survey respondents and less than half of those in China cite "preventing breaches" as their biggest security challenge. Only one-quarter of U.S. respondents rank either unauthorized employee access to files and data or theft of customer data by outsiders in their top three security priorities, and even fewer put the loss or theft of mobile devices containing corporate data or the theft of intellectual property in that category. This lack of urgency persists despite highly publicized--and highly embarrassing--data-loss incidents in the last year and a half involving retailer TJX, the Department of Veterans Affairs, and the Georgia Community Health Department, among many, many others.
Instead, as with last year, the top three security priorities are viruses or worms (65% of U.S. respondents, 75% in China), spyware and malware (56% and 61%), and spam (40% in both countries).
So are security pros focusing on the wrong things? Yes, says Jerry Dixon, director of Homeland Security's National Cyber Security Division. "You need to know where your data resides and who has access to it," Dixon says. "This speaks to the integrity of the data that resides in your databases, the data that you use to carry out your business."
When asked what security pros should be worried about, security researcher Bruce Schneier, CTO of service provider BT Counterpane, puts it this way: "Crime, crime, crime, and compliance."
It seems as though security pros are missing the point, choosing to focus on the security threats with which they're most familiar as opposed to emerging threats designed to cash in on the value of customer data and intellectual property. A careful reading of our survey's results, however, indicates that organizations are waking up to just how vulnerable their customer information and intellectual property are to data thieves.
For example, the No. 1 reason for feeling more vulnerable to attack this year, according to 70% of U.S. respondents, is the increased sophistication of threats, including SQL injection -- a technique that slips database commands into Web site requests with one purpose: to steal information from the databases behind Web applications.
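For readers unfamiliar with the technique, here is a minimal sketch of how an injected payload dumps a whole table, and how a parameterized query defuses the same payload. The in-memory SQLite database, table, and data are made up for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT, card TEXT)")
conn.execute("INSERT INTO customers VALUES ('alice', '4111-1111')")
conn.execute("INSERT INTO customers VALUES ('bob', '4222-2222')")

# Vulnerable: input is pasted into the SQL text, so the classic
# "' OR '1'='1" payload makes the WHERE clause always true.
payload = "nobody' OR '1'='1"
leaked = conn.execute(
    "SELECT name, card FROM customers WHERE name = '%s'" % payload
).fetchall()    # every customer record comes back

# Safe: a parameterized query treats the payload as a literal string.
safe = conn.execute(
    "SELECT name, card FROM customers WHERE name = ?", (payload,)
).fetchall()    # no such customer -- empty result
```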
The next three reasons for feeling vulnerable: more ways for corporate networks to be attacked (including wireless access points); increased volume of attacks; and more malicious intent on the part of attackers (i.e., theft, data destruction, and extortion). Our survey suggests that companies think they're being attacked less to bring down their networks--though that remains the primary outcome of cyberattacks--and more to have their assets (customer or enterprise data) stolen. Only 13% of U.S. respondents see denial-of-service or other network-impairing attacks as a top three priority, down from 26% a year ago. Chinese respondents were only marginally more concerned about denial-of-service attacks.
Some security pros may be blissfully ignorant. Botnets, which can take control of IT resources remotely and can be used to launch attacks or steal information, debut as a concern in this year's survey, though only 10% of U.S. respondents and 13% of Chinese respondents rank them as a top three problem. This may be because companies are often unaware that they've been infiltrated by botnets, which is exactly what bot herders are counting on.
Similarly, viruses, worms, and phishing are the top three types of security breaches reported by U.S. respondents. Seventh on the list: identity theft. But that doesn't mean that identity theft isn't a greater threat. Identity theft and fraud are worst-case scenarios for a company whose data has been compromised, but not having experienced them could be as much about luck as it is security. TJX was extremely unlucky in that some of the 45.7 million customer records stolen from its IT systems over the past few years surfaced earlier this year in Florida, where they were used to create fake credit cards and defraud several Wal-Mart stores of millions of dollars. By contrast, the VA, last year's poster child for data insecurity, lost 27 million records when a laptop was stolen from an employee's house, but so far no identity theft or fraud activities have been traced back to that security breach.
Here's another sign that data security is a growing concern: While U.S. respondents measure the value of their security investments first for their ability to cut the number of hours workers spend on security-related issues (43% of respondents), second in priority is how well these measures protect customer records (35%), and third is a decline in the number of breaches (33%).
Perhaps the most surprising stat of the entire survey is that nearly a quarter of U.S. respondents don't measure the value of their security investments at all.
As already mentioned, the most significant impact of cyberattacks is network downtime, followed by business apps, including e-mail, being rendered unavailable. Third on the impact list, as reported by a quarter of U.S. respondents and 41% in China, is information confidentiality being compromised. Fourth is "minor" financial losses, reported by 18% in the United States and 21% in China. They were the lucky ones.
The financial impact of security lapses is difficult to calculate in the short term, particularly when it involves the loss of data. In fact, the highest percentage of respondents admitting to a breach, 35% in the United States and 31% in China, say they don't know the total value of the loss they suffered.
In the long run, though, the security losses can be painfully obvious. TJX reported a $20 million computer intrusion-related charge for its third quarter, ended April 28. The loss to the Florida Wal-Marts: about $8 million in merchandise.
The shadowy underground of malicious hackers and cyberthieves has been responsible for some high-profile breaches over the past 12 months, and concerns over the next strike occupy most security pros' time. More than half of those surveyed cited computer hackers as the source of breaches or espionage at their companies within the past year, and more than a third suspect that malicious coders were responsible. Just as significant, though, breaches by unauthorized users rose in 2007, to 34% from 28% in 2006.
At BryanLGH Medical Center in Lincoln, Neb., CIO Rich Marreel's main security concern is protecting the organization's patient data, not just from malicious hackers but from employee misuse, whether intentional or not. "We're always concerned about people sharing their authentication credentials with someone else or with information leaving the organization via laptops or memory sticks," Marreel says. The solution: a combination of employee education and security technology, including encryption.
Without carefully managing user access rights, and demanding that users protect their login and password information, companies introduce "a hidden threat," Marreel says. At the hospital, for example, "a lot of people were writing down logins and passwords and carrying them around or even posting them on their PCs," he says. Haven't we heard this before?
U.S. and Chinese survey responses are similar in many ways, but differ in others. For instance, exploiting known operating system vulnerabilities is the leading method of attack in both countries--43% of respondents in the United States and a whopping two-thirds in China say so. The same disproportionate response applies to the second leading attack method--known application vulnerabilities--where 41% of Chinese respondents' systems were compromised that way, as compared with less than a quarter in the United States. This could be the result of the large amount of pirated software used in China, says Accenture's MacWillson. "They don't have access to the patches," he points out (see story, "China's Evolutionary Leap").
Other popular methods of attack cited by respondents include falsified information in e-mail attachments (26% and 25%) and exploiting unknown operating system vulnerabilities (24% and 31%). Such intrusions, however, aren't the only concerns. Of the 804 U.S. respondents admitting to having experienced breaches or espionage in the past 12 months, 18% attribute the problem to unauthorized employees, and 16% suspect authorized users and employees.
But that's down from nearly 25% of companies reporting breaches in 2006. And that's surprising, because there's no getting around the fact that employees are a weak link in the security chain. Gary Min's attempted fleecing of his former employer, chemical company DuPont, included about $400 million worth of company trade secrets that he tried to turn over to a DuPont competitor before that company alerted the FBI. Hiding in plain sight, Min accessed an unusually high volume of abstracts and full-text .pdf documents off DuPont's Electronic Data Library server, one of the company's main databases for storing confidential and proprietary information. Min downloaded about 22,000 abstracts from the EDL and accessed about 16,706 documents--15 times the number of abstracts and reports accessed by the next highest user of the EDL for that period. Min pleaded guilty and faces up to 10 years in prison, a fine of $250,000, and restitution.
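Min's volume of access stood out only in hindsight: 15 times that of the next heaviest user. A toy anomaly check along those lines, with a made-up data layout and threshold, shows how simple such monitoring can be:

```python
import statistics

def flag_heavy_users(access_counts, ratio=10):
    """Flag any user whose document count exceeds `ratio` times the
    median count of all other users."""
    flagged = []
    for user, count in access_counts.items():
        others = [c for u, c in access_counts.items() if u != user]
        if others and count > ratio * statistics.median(others):
            flagged.append(user)
    return flagged

# Hypothetical per-quarter document counts from a data-library server:
counts = {"gmin": 16706, "user_a": 1100, "user_b": 900, "user_c": 750}
# flag_heavy_users(counts) singles out "gmin"
```

A real deployment would baseline per role and per time window, but even this crude ratio would have surfaced a 15x outlier.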
Besides outright fraud, employees fail to protect the data they have stored on their corporate IT assets, mainly their laptops. Laptops and portable storage devices are being stolen from employees' cars and homes in mind-boggling numbers. Last month, a backup computer storage device with the names and Social Security numbers of every employee in the state of Ohio--more than 64,000 records--was stolen from a state intern's car. Twelve months earlier, a laptop containing names, addresses, and credit and debit card information of 243,000 Hotels.com customers was stolen from an Ernst & Young employee's car in Texas. "Most of this is human error or bad business process," says Rhonda MacLean, CEO of consulting firm MacLean Risk Partners and former chief security officer at Bank of America and Boeing.
Similarly, only 5% of survey respondents cite contract service providers, consultants, or auditors as the source of their breaches. But that doesn't mean they shouldn't be concerned.
In April, the Georgia Department of Community Health reported the loss of 2.9 million records containing personal information, including full names, addresses, birth dates, Medicaid and children's health care recipient identification numbers, and Social Security numbers, when a computer disk went missing from service provider Affiliated Computer Services, which was contracted to handle health care claims for the state.
"If a partner or service provider has access to any of our data, we want a security paragraph written into our contract that gives us the right to perform a security audit against them and to perform these audits regularly," says Randy Barr, chief security officer of WebEx, a Web-conferencing company. Barr says that all contractors with access to company systems must undergo background checks, a policy since 2004.
You'd think that simply educating employees and partners about your company's security policies would be sufficient to keep generally honest people from letting customer information leak out through e-mails, instant messages, and peer-to-peer networks--but you'd be wrong. Sure, the No. 1 tactical security priority for U.S. companies in 2007, according to 37% of respondents, is creating and enhancing user awareness of policies. But that's down from 42% in 2006. A smaller percentage of U.S. companies also plan to install better access controls, monitoring software, and secure remote access systems. In China, companies are focusing on installing application firewalls, better access controls, and monitoring software.
Only 19% of respondents say that security technology and policy training will have a significant impact on alleviating employee-based security breaches, the same percentage as last year. "It takes more than showing them a few videos," consultant MacLean says. "You have to track employee training and make sure that employees finish with at least the basic understanding of what you want them to know."
Over the past 12 months, the change at Eisenhower Medical Center in Rancho Mirage, Calif., that's had the greatest impact on security is the health care organization's move from a paper-based to an electronic patient records system. "This put more responsibility on us to make sure the patient's data is secure," says CIO David Perez. "And it's not just the movement of the data online but the volume of that data makes it more challenging. A CAT scan a few years ago would provide 250 to 500 images, but our new system can produce up to 5,000 images."
As more and more physicians and medical staff log on to Eisenhower's intranet portal to do their work, Perez and his team must increase their monitoring for security problems and ensure that only the appropriate physicians and staff are accessing different medical records, as required by the Health Insurance Portability and Accountability Act. "Users sign a confidentiality statement when they join the medical center," Perez says. "We'll also post reminders on the employee portal."
Some companies prefer the Big Brother approach. Of the U.S. respondents who say their companies monitor employee activities, 51% monitor e-mail use, 40% monitor Web use, and 35% monitor phone use, roughly consistent with last year's findings. However, other sources of data leakage are given less attention: Only 29% monitor instant messaging use, 22% the opening of e-mail attachments, and 20% the contents of outbound e-mail messages. And only a handful keep a close eye on the use of portable storage devices.
Still, 42% of respondents say data leakage is bad enough that employees should be fined or punished in some way for their role in security breaches, once those employees have been trained. Consultant MacLean takes an even tougher tack: "Termination is pretty severe, but in some cases it's appropriate, as is civil or even criminal prosecution."
A significant number of respondents want to put the responsibility for porous security on the companies selling them security technology. Forty-five percent of U.S. companies and 47% of companies in China think security vendors should be held legally and financially liable for security vulnerabilities in their products and services.
Some of the unease about corporate IT security may stem from the fact that most companies don't have a centralized security executive assessing risks and threats and then calling the shots to address these concerns. The process for setting security policy in most companies is collaborative, and groups comprising the CIO, CEO, IT management, and security management all have input. Eisenhower Medical Center doesn't have a chief information security officer, instead relying on its general counsel to make regulatory compliance decisions, and on CIO Perez, working with system administrators, to set security policy. "We gather information from each director in each department to find out what systems and data they need access to," Perez says. "It's an interesting back and forth. The doctors want easy access, and we're trying to make it more secure."
The number of chief information security officers has grown significantly in the last year. Roughly three-quarters of survey respondents say their companies have CISOs, compared with 39% in 2006. CISOs predominantly report to the CEO or the CIO.
When it comes to the ultimate sign-off, however, half of U.S. companies say that the CEO determines security spending. In the United States, the greatest percentage of respondents, 37%, say their companies assess risks and threats without the input of a CISO, while an astounding 22% say they don't regularly assess security risks and threats at all.
In the United States, the portion of IT budgets devoted to security remains pretty flat; companies plan to spend an average of 12% this year, compared with 13% last year. China, on the other hand, is on a security spending spree: The average percentage of IT budget devoted to security this year is 19%, compared with 16% in 2006. It's interesting to note that 39% of U.S. companies and 55% in China expect 2007 security spending levels to surpass those in 2006.
If it all sounds overwhelming, don't panic. While information security has gotten more complex--as attackers alter both their methods and their targets, and companies layer more and more security products on top of each other--the good news is that the measures required to plug most security holes often come down to common sense, an increasingly important quality to look for in any employee or manager handling sensitive data.
At BryanLGH Medical Center in Lincoln, Neb., CIO Rich Marreel's main security concern is protecting the organization's patient data, not just from malicious hackers but from employee misuse, whether intentional or not. "We're always concerned about people sharing their authentication credentials with someone else or with information leaving the organization via laptops or memory sticks," Marreel says. The solution: a combination of employee education and security technology, including encryption.
Without carefully managing user access rights, and demanding that users protect their login and password information, companies introduce "a hidden threat," Marreel says. At the hospital, for example, "a lot of people were writing down logins and passwords and carrying them around or even posting them on their PCs," he says. Haven't we heard this before?
U.S. and Chinese survey responses are similar in many ways, but they diverge in others. For instance, exploiting known operating system vulnerabilities is the leading method of attack in both countries--43% of respondents in the United States and a whopping two-thirds in China say so. The same disproportionate response applies to the second leading attack method--known application vulnerabilities--where 41% of Chinese respondents' systems were compromised that way, as compared with less than a quarter in the United States. This could be the result of the large amount of pirated software used in China, says Accenture's MacWillson. "They don't have access to the patches," he points out (see story, "China's Evolutionary Leap").
Other popular methods of attack cited by respondents include falsified information in e-mail attachments (26% and 25%) and exploiting unknown operating system vulnerabilities (24% and 31%). Such intrusions, however, aren't the only concerns. Of the 804 U.S. respondents admitting to having experienced breaches or espionage in the past 12 months, 18% attribute the problem to unauthorized employees, and 16% suspect authorized users and employees.
But that's down from nearly 25% of companies reporting breaches in 2006. And that's surprising, because there's no getting around the fact that employees are a weak link in the security chain. Gary Min's attempted fleecing of his former employer, chemical company DuPont, included about $400 million worth of company trade secrets that he tried to turn over to a DuPont competitor before that company alerted the FBI. Hiding in plain sight, Min accessed an unusually high volume of abstracts and full-text .pdf documents off DuPont's Electronic Data Library server, one of the company's main databases for storing confidential and proprietary information. Min downloaded about 22,000 abstracts from the EDL and accessed about 16,706 documents--15 times the number of abstracts and reports accessed by the next highest user of the EDL for that period. Min pleaded guilty and faces up to 10 years in prison, a fine of $250,000, and restitution.
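The DuPont numbers above suggest a simple countermeasure: flag users whose document-access volume dwarfs their peers', as Min's did at 15 times the next-highest user. Here is a minimal sketch of that idea, with invented usernames, counts, and threshold; a median-based rule is used because a mean would be dragged upward by the very outlier being hunted.

```python
from statistics import median

# Invented data and threshold, for illustration only: flag any user whose
# weekly download count exceeds `multiplier` times the median user's.
def flag_outliers(access_counts, multiplier=10):
    med = median(access_counts.values())
    return [user for user, n in access_counts.items()
            if med and n > multiplier * med]

weekly_downloads = {"alice": 40, "bob": 55, "carol": 35,
                    "dave": 60, "eve": 1500}   # one user far above baseline
print(flag_outliers(weekly_downloads))  # ['eve']
```

A production system would of course baseline per role and per time window, but even this crude rule would have surfaced access running at many multiples of everyone else's.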
Besides outright fraud, employees fail to protect the data they have stored on their corporate IT assets, mainly their laptops. Laptops and portable storage devices are being stolen from employees' cars and homes in mind-boggling numbers. Last month, a backup computer storage device with the names and Social Security numbers of every employee in the state of Ohio--more than 64,000 records--was stolen from a state intern's car. Twelve months earlier, a laptop containing names, addresses, and credit and debit card information of 243,000 Hotels.com customers was stolen from an Ernst & Young employee's car in Texas. "Most of this is human error or bad business process," says Rhonda MacLean, CEO of consulting firm MacLean Risk Partners and former chief security officer at Bank of America and Boeing.
Similarly, only 5% of survey respondents cite contract service providers, consultants, or auditors as the source of their breaches. But that doesn't mean they shouldn't be concerned.
In April, the Georgia Department of Community Health reported the loss of 2.9 million records containing personal information, including full names, addresses, birth dates, Medicaid and children's health care recipient identification numbers, and Social Security numbers, when a computer disk went missing from service provider Affiliated Computer Services, which was contracted to handle health care claims for the state.
"If a partner or service provider has access to any of our data, we want a security paragraph written into our contract that gives us the right to perform a security audit against them and to perform these audits regularly," says Randy Barr, chief security officer of WebEx, a Web-conferencing company. Barr says that all contractors with access to company systems must undergo background checks, a policy since 2004.
You'd think that simply educating employees and partners about your company's security policies would be sufficient to keep generally honest people from letting customer information leak out through e-mails, instant messages, and peer-to-peer networks--but you'd be wrong. Sure, the No. 1 tactical security priority for U.S. companies in 2007, according to 37% of respondents, is creating and enhancing user awareness of policies. But that's down from 42% in 2006. A smaller percentage of U.S. companies also plan to install better access controls, monitoring software, and secure remote access systems. In China, companies are focusing on installing application firewalls, better access controls, and monitoring software.
Only 19% of respondents say that security technology and policy training will have a significant impact on alleviating employee-based security breaches, the same percentage as last year. "It takes more than showing them a few videos," consultant MacLean says. "You have to track employee training and make sure that employees finish with at least the basic understanding of what you want them to know."
Over the past 12 months, the change at Eisenhower Medical Center in Rancho Mirage, Calif., that's had the greatest impact on security is the health care organization's move from a paper-based to an electronic patient records system. "This put more responsibility on us to make sure the patient's data is secure," says CIO David Perez. "And it's not just the movement of the data online but the volume of that data makes it more challenging. A CAT scan a few years ago would provide 250 to 500 images, but our new system can produce up to 5,000 images."
As more and more physicians and medical staff log on to Eisenhower's intranet portal to do their work, Perez and his team must increase their monitoring for security problems and ensure that only the appropriate physicians and staff are accessing different medical records, as required by the Health Insurance Portability and Accountability Act. "Users sign a confidentiality statement when they join the medical center," Perez says. "We'll also post reminders on the employee portal."
Some companies prefer the Big Brother approach. Of the U.S. respondents who say their companies monitor employee activities, 51% monitor e-mail use, 40% monitor Web use, and 35% monitor phone use, roughly consistent with last year's findings. However, other sources of data leakage are given less attention: Only 29% monitor instant messaging use, 22% the opening of e-mail attachments, and 20% the contents of outbound e-mail messages. And only a handful keep a close eye on the use of portable storage devices.
Still, 42% of respondents say data leakage is bad enough that employees should be fined or punished in some way for their role in security breaches, once those employees have been trained. Consultant MacLean takes an even tougher tack: "Termination is pretty severe, but in some cases it's appropriate, as is civil or even criminal prosecution."
A significant number of respondents want to put the responsibility for porous security on the companies selling them security technology. Forty-five percent of U.S. companies and 47% of companies in China think security vendors should be held legally and financially liable for security vulnerabilities in their products and services.
Some of the unease about corporate IT security may stem from the fact that most companies don't have a centralized security executive assessing risks and threats and then calling the shots to address these concerns. The process for setting security policy in most companies is collaborative, and groups comprising the CIO, CEO, IT management, and security management all have input. Eisenhower Medical Center doesn't have a chief information security officer, instead relying on its general counsel to make regulatory compliance decisions, and on CIO Perez, working with system administrators, to set security policy. "We gather information from each director in each department to find out what systems and data they need access to," Perez says. "It's an interesting back and forth. The doctors want easy access, and we're trying to make it more secure."
The number of chief information security officers has grown significantly in the last year. Roughly three-quarters of survey respondents say their companies have CISOs, compared with 39% in 2006. CISOs predominantly report to the CEO or the CIO.
When it comes to the ultimate sign-off, however, half of U.S. companies say that the CEO determines security spending. In the United States, the greatest percentage of respondents, 37%, say their companies assess risks and threats without the input of a CISO, while an astounding 22% say they don't regularly assess security risks and threats at all.
In the United States, the portion of IT budgets devoted to security remains pretty flat; companies plan to spend an average of 12% this year, compared with 13% last year. China, on the other hand, is on a security spending spree: The average percentage of IT budget devoted to security this year is 19%, compared with 16% in 2006. It's interesting to note that 39% of U.S. companies and 55% in China expect 2007 security spending levels to surpass those in 2006.
If it all sounds overwhelming, don't panic. While information security has gotten more complex--as attackers alter both their methods and their targets, and companies layer more and more security products on top of each other--the good news is that the measures required to plug most security holes often come down to common sense, an increasingly important quality to look for in any employee or manager handling sensitive data.
Data leakage prevention: does it work?
A pair of researchers has discovered multiple types of flaws in various vendors' DLP products that would let an attacker evade them, alter their records of stolen data, and even use them to bot-infect client machines.
At the heart of the problem is the way some DLP products are being designed, the researchers say. Their underlying approach, a sort of honor system, is problematic: "It's like handing a bracelet to all the suspects and saying, 'don't do anything wrong, or we'll catch you,'" says Eric Monti, a security researcher with Matasano Security who led the firm's research on DLP products, which will be presented at Black Hat USA next month in Las Vegas.
"By the time you've detected your most sensitive information was leaked, the ultimate value of the DLP product is [gone]" and the attacker has copied a dossier of data on your firm's Social Security numbers, says Thomas Ptacek, co-founder and researcher with Matasano Security. "Forget about calling the security team in: It's over. You need to call PR and try to mitigate" the publicity fallout, he says.
Matasano won't name names, but several of the DLP vendors the firm alerted about the bugs -- which include buffer overflows and SQL injection -- are already working on fixes. Even so, enterprises need to know that these tools can backfire if they're not secured or audited, the researchers say.
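For a sense of the SQL-injection class Matasano cites (the firm names no products, so the schema and queries below are invented for illustration), here is the classic vulnerable-versus-parameterized pattern in a hypothetical DLP event-log lookup:

```python
import sqlite3

# Hypothetical event-log lookup in a DLP server. Schema and queries are
# invented; Matasano has not disclosed the affected products.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (agent TEXT, detail TEXT)")
conn.execute("INSERT INTO events VALUES ('pc-1', 'copied payroll.xls')")
conn.execute("INSERT INTO events VALUES ('pc-2', 'emailed design.pdf')")

def events_for_unsafe(agent):
    # Vulnerable: attacker-controlled input is spliced into the SQL text.
    return conn.execute(
        f"SELECT detail FROM events WHERE agent = '{agent}'").fetchall()

def events_for_safe(agent):
    # Parameterized: the driver binds the value; it is never parsed as SQL.
    return conn.execute(
        "SELECT detail FROM events WHERE agent = ?", (agent,)).fetchall()

payload = "' OR '1'='1"                  # classic tautology injection
print(len(events_for_unsafe(payload)))   # 2 -- the whole table leaks
print(len(events_for_safe(payload)))     # 0 -- no agent has that literal name
```

The irony the researchers point to is that the leaked table is itself a record of sensitive activity, so the monitoring tool becomes the disclosure channel.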
"DLP is a top line-item for IT," Ptacek says. "A vulnerability in the piece of software that controls hundreds or thousands of machines is a catastrophe... if an attacker can find that vulnerability and take control of it. If it’s not extremely well-audited, there [will be] latent botnet infections on your network."
Some DLP products are especially leaky. The communication between agent and server was weak in many DLP products the researchers tested. "It was mainly ad hoc and weak encryption," notes Ptacek. "This data is the crown jewels of the enterprise, and the communication between agent and server has to be [better] protected."
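One standard replacement for such ad hoc agent-to-server schemes is to authenticate every report with an HMAC under a shared key, with TLS carrying the channel for confidentiality. The sketch below is a generic illustration, not any vendor's actual protocol; the message format and key handling are assumptions.

```python
import hashlib
import hmac
import os

# Generic sketch: each agent report carries an HMAC-SHA256 tag under a key
# shared with the server, so forged or tampered reports are rejected.
# (HMAC gives integrity only; confidentiality would come from TLS.)
KEY = os.urandom(32)  # assumed provisioned to agent and server out of band

def agent_send(report: bytes) -> bytes:
    tag = hmac.new(KEY, report, hashlib.sha256).digest()
    return tag + report

def server_receive(wire: bytes) -> bytes:
    tag, report = wire[:32], wire[32:]
    expected = hmac.new(KEY, report, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("report failed authentication")
    return report

wire = agent_send(b"agent=pc-7 event=usb_copy file=payroll.xls")
print(server_receive(wire))                   # round-trips intact

tampered = wire[:-1] + bytes([wire[-1] ^ 1])  # flip one bit of the report
try:
    server_receive(tampered)
except ValueError as err:
    print(err)                                # report failed authentication
```

This directly addresses the attack the researchers describe of altering a DLP product's records of stolen data in transit.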
And with Windows machines, DLP products must embed code into the kernel, which of course opens another can of worms. "The [more] stuff that's loaded into the kernel, the harder it [the DLP product] is to evade," Ptacek says. "But it also exposes more vulnerabilities to the kernel itself."
Monti says he and Ptacek will demonstrate at Black Hat a fictional DLP product that combines typical DLP features (along with the common bugs) to illustrate the risks of these tools. The researchers' goal is to make organizations aware of these vulnerabilities in DLP products and to show them how to spot the flaws.
"One of the things we advocate is that they do their homework on all regulatory compliance criteria," Monti says. "If they are using DLP products that don't comply to that level... they are actually failing compliance, because they are using this security product," Monti notes.
Cory Scott, vice president of global research, guidance, and consulting at ABN-AMRO, says security tools shouldn't introduce any risk to the enterprise. "The technology is only as good as the implementation," says Scott, who was speaking independently, not on behalf of ABN-AMRO. "In the cases of the vendors that Tom and company looked at, it appears as if the development and design practices were lacking. Think of the Hippocratic oath: You don't want the cure to be worse than the disease."
Still, although these products won't stop a determined attacker, Scott says, if you properly vet and audit them, you're practicing due diligence in protecting your data as well as that of your customers.
Ransomware is back
Ransomware last seen in 2006 has reappeared to encrypt files and extort $300 from its victims, according to a Russian security researcher.
GpCode, a Trojan program which last appeared in the wild last summer, has popped up again, said Aleks Gostev, senior virus analyst with Moscow-based Kaspersky Lab, in a posting to the research centre's blog.
Noting the long quiet time, Gostev added: "So you can imagine our feelings this weekend, when some of our non-Russian users told us their documents, photos, archive files etc. had turned into a bunch of junk data, and a file called 'read_me.txt' had appeared on their systems."
The text file contained the "ransom" note.
"Hello, your files are encrypted with RSA-4096 algorithm. You will need at least few years to decrypt these files without our software. All your private information for last 3 months were collected and sent to us. To decrypt your files you need to buy our software. The price is $300."
So-called ransomware typically follows the GpCode pattern: malware sneaks onto a PC, encrypts files, and then displays a message demanding money to unlock the data.
Gostev hinted that the blackmailer was likely Russian. "The email address is one that we've seen before in LdPinch and Banker [Trojan horse] variants, programs which were clearly of Russian origin," he said.
The blackmailer's claim that the files were enciphered with RSA-4096 -- the RSA algorithm locked with a 4,096-bit key -- is bogus, said Gostev. Another oddity, he added, was that the Trojan has a limited shelf life: from July 10 to July 15.
"Why? We can only guess," said Gostev.
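One reason for Gostev's skepticism: RSA can encrypt at most one modulus-sized block per (slow) operation, so files of arbitrary size cannot literally be "encrypted with RSA-4096." Practical designs use RSA only to wrap a fast symmetric key. The arithmetic, using the PKCS#1 v2.1 OAEP bound, is a one-liner:

```python
# Max plaintext per RSA-OAEP operation is k - 2*hLen - 2 bytes (PKCS#1 v2.1),
# where k is the modulus size in bytes and hLen the hash output size
# (32 bytes for SHA-256).
def rsa_oaep_max_plaintext(modulus_bits: int, hash_bytes: int = 32) -> int:
    return modulus_bits // 8 - 2 * hash_bytes - 2

print(rsa_oaep_max_plaintext(4096))  # 446 bytes per operation, at best
```

Encrypting a photo collection 446 bytes at a time with 4,096-bit RSA would be spectacularly slow, which is why analysts treat such ransom-note claims as bluster.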
Kaspersky is working on a decryption scheme to recover the files; that process has been the usual salvation -- and solution -- for users attacked by ransomware. "[But] we'd just like to remind you, if you've fallen victim to any type of ransomware, you should never pay up under any circumstances.
"Contact your anti-virus provider, and make sure you back up your data on a regular basis."
Hackers steal Government data
Hackers stole information from the U.S. Department of Transportation and several U.S. corporations by seducing employees with fake job-listings on ads and e-mail, a computer security firm said on Monday.
The list of victims included several companies known for providing security services to government agencies.
They include consulting firm Booz Allen, computer services company Unisys Corp., defense contractor L-3 Communications, computer maker Hewlett-Packard Co., and satellite network provider Hughes Network Systems, a unit of Hughes Communications Inc., said Mel Morris, chief executive of British Internet security provider Prevx Ltd.
Hewlett-Packard declined comment, while officials with other companies couldn't be reached for comment. A Department of Transportation spokeswoman said the agency couldn't find any indication of a security breach.
Malicious programs were able to pass sophisticated security systems undetected because that software hadn't been instructed that they were dangerous. Hackers targeted only a limited group of personal computers, which kept traffic down and allowed them to stay under the radar of security software, which tends to identify threats only when activity reaches a certain level.
"What is most worrying is that this particular sample of malware wasn't recognized by existing antivirus software. It was able to slip through enterprise defenses," said Yankee Group security analyst Andrew Jaquith, who learned of the breach from Morris.
It was not clear whether the hackers used information stolen from the personal computers, Morris said.
Internet security firms began to release patches to fight the malicious software on Monday night.
Trend Micro, for example, has sent its customers software that prevents the malware from being installed on computers. It also blocks browsers from going to Web sites that the company has identified as being infected with the dangerous programs, said company spokesman Mike Haro.
"This is a serious threat. It shows how sophisticated hackers have become," Haro said.
A piece of software, NTOS.exe, probes the PC for confidential data, then sends it to a Web site hosted by Yahoo Inc. That site's owner is likely unaware that it is being used by hackers, Morris said.
That Web site hosts data that had been stolen from more than 1,000 PCs and encrypted before it was posted on the site, according to Morris.
He said that he believes the hackers have set up several "sister" Web sites that are collecting similar data from other squadrons of malware.
Officials with Yahoo weren't available for comment.
Morris said that he had downloaded the data from the Web site and decrypted it at the request of investigators from the FBI's Law Enforcement Online, or LEO, program, who were looking into the matter.
An FBI spokesman declined comment, saying it is agency policy to neither confirm nor deny whether an investigation is ongoing.
Tuesday, July 10, 2007
Pain in the BOT
From a technology standpoint, bots are pretty neat. These elusive applications can wiggle into a vulnerable computer, communicate secretly with their control centers, download active code snippets for a specific attack, and evade the latest in layered security mechanisms. Unfortunately, the instructions are usually to blast out spam, launch a DOS attack, or steal secrets from resources the bot has access to. The business plan for bots is also a winner: Only a low percentage of bots has to deliver for the attack to be successful, the distribution costs are negligible, and the risk of prosecution of the people behind the attack is close to zero.
One of the perplexing problems with bots is figuring out how to remove them and who has incentive to solve the removal problem. It is pretty clear that endpoint security products have been ineffective, perhaps because users don't have them deployed and configured properly. But it doesn’t matter, because security is still not doing the job. Personal firewall features are not catching command and control communications. Signature checking is faked out by the polymorphic nature of code downloads. Attacks are seldom thwarted. Nobody seems able to eradicate bots from a machine and to keep them from coming back.
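The signature problem is easy to see concretely: two byte strings can behave identically yet hash differently, so a scanner keyed to the first sample's fingerprint misses the mutated one. The "payloads" below are inert placeholder strings, not real code.

```python
import hashlib

# A scanner that stored the first sample's hash never matches a padded
# variant, even though both "run" identically. Both strings are inert.
payload_v1 = b"\x90\x90" + b"ATTACK_LOGIC"           # original sample
payload_v2 = b"\x90\x90\x90\x90" + b"ATTACK_LOGIC"   # same logic, extra NOPs

sig_v1 = hashlib.sha256(payload_v1).hexdigest()
sig_v2 = hashlib.sha256(payload_v2).hexdigest()
print(sig_v1 == sig_v2)  # False -- the stored signature misses the variant
```

Polymorphic downloaders automate exactly this kind of mutation on every infection, which is why behavior- and traffic-based detection gets so much attention below.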
But two vendors are coming out with new anti-bot tools. Symantec Corp. (Nasdaq: SYMC - message board) is beta-testing its anti-bot endpoint software solution, Symantec Anti-bot (with Sana Security Inc. contributing technology). The software notices when a bot changes its executable as a prelude to an attack. Anti-bot acts to restore the executable to a clean state before any attack can be launched. This ability to repair an infected machine is pretty interesting. Still, I'm not sure that the average consumer is going to buy an additional product to relieve the pain of bots when he is more worried about identity theft and malware forcing him to rebuild his computer. Let’s hope that Symantec comes to its senses and folds anti-bot protection into its Norton Endpoint Protection package for consumers.
Mi5 Networks Inc. , meanwhile, believes that enterprises will fight bots to reduce the risk of excessive cleanup costs resulting from infected networks and endpoints. Mi5 is now shipping Webgate, an appliance that seeks out the command and control communications lifeline that active bots require. Mi5 looks at all protocols on the wire to identify scanning and “phone home” activity from bot-infected machines. When found, the machine can either be automatically cleaned with a software agent, or IT folks can roll up their sleeves and manually eliminate the bot. Sometimes bots are only in contact every couple of months, but enterprises still should be encouraged to tackle bots to protect confidential data and to keep the business infrastructure flowing smoothly. (See Mi5's Not-So-Secret Weapon.)
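A rough sketch of how phone-home detection can work (this is a generic illustration, not Mi5's actual algorithm; the threshold and traffic data are invented): bot beacons tend to fire at machine-regular intervals, so near-zero variance in connection gaps to a destination is suspicious.

```python
from statistics import mean, pstdev

# Invented threshold and traffic: flag hosts whose outbound connections to a
# given destination arrive at near-clockwork intervals.
def looks_like_beacon(timestamps, max_jitter_ratio=0.1):
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 3:
        return False  # too little data to judge
    # Coefficient of variation of the gaps: near zero means machine-regular.
    return pstdev(gaps) / mean(gaps) < max_jitter_ratio

bot = [0, 300, 600, 900, 1200, 1500]      # checks in every 5 minutes exactly
human = [0, 42, 980, 1010, 2400, 2433]    # bursty, irregular browsing
print(looks_like_beacon(bot), looks_like_beacon(human))  # True False
```

The catch, as noted above, is dwell time: a bot that checks in every couple of months gives a detector very few gaps to measure, so appliances also look for scanning and protocol anomalies.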
In many ways, bots are virtual machines that are designed to launch attack applications in a protected environment. From a security standpoint, bots are like most malware in that they modify the PC configuration, use the network in inappropriate ways, and propagate to other vulnerable machines. Service providers are not going to help solve the problem, and an anti-spam-style service approach doesn't seem to be on the horizon. That means the only alternatives are to bolster endpoint security software for consumers and add more intelligence to the network for enterprises. Right now, the bots are clearly winning.
One of the perplexing problems with bots is figuring out how to remove them -- and who has an incentive to solve the removal problem. It is pretty clear that endpoint security products have been ineffective, perhaps because users don't have them deployed and configured properly. Whatever the reason, security is still not doing the job: Personal firewall features are not catching command and control communications, signature checking is faked out by the polymorphic nature of code downloads, and attacks are seldom thwarted. Nobody seems able to eradicate bots from a machine and keep them from coming back.
But two vendors are coming out with new anti-bot tools. Symantec Corp. (Nasdaq: SYMC - message board) is beta-testing its anti-bot endpoint software solution, Symantec Anti-bot (with Sana Security Inc. contributing technology). The software notices when a bot changes its executable as a prelude to an attack. Anti-bot acts to restore the executable to a clean state before any attack can be launched. This ability to repair an infected machine is pretty interesting. Still, I'm not sure that the average consumer is going to buy an additional product to relieve the pain of bots when he is more worried about identity theft and malware forcing him to rebuild his computer. Let’s hope that Symantec comes to its senses and folds anti-bot protection into its Norton Endpoint Protection package for consumers.
Mi5 Networks Inc. , meanwhile, believes that enterprises will fight bots to reduce the risk of excessive cleanup costs resulting from infected networks and endpoints. Mi5 is now shipping Webgate, an appliance that seeks out the command and control communications lifeline that active bots require. Mi5 looks at all protocols on the wire to identify scanning and “phone home” activity from bot-infected machines. When found, the machine can either be automatically cleaned with a software agent, or IT folks can roll up their sleeves and manually eliminate the bot. Sometimes bots are only in contact every couple of months, but enterprises still should be encouraged to tackle bots to protect confidential data and to keep the business infrastructure flowing smoothly. (See Mi5's Not-So-Secret Weapon.)
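Webgate's "phone home" detection hinges on a simple observable: bots check in with their command-and-control servers on a schedule, while human-driven traffic is bursty. As a rough illustration of the idea -- not Mi5's actual algorithm; the function and thresholds below are invented for the sketch -- regularly spaced connections to one destination can be flagged by their low timing jitter:

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, min_events=4, max_jitter=0.1):
    """Flag connection times to one destination as a possible bot
    check-in channel when they are suspiciously regular.
    timestamps: sorted event times in seconds."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    avg = mean(gaps)
    if avg <= 0:
        return False
    # A bot's heartbeat has tiny jitter relative to its interval;
    # human browsing does not.
    return pstdev(gaps) / avg < max_jitter

# A bot checking in roughly every 300 seconds vs. a person browsing.
bot_times = [0, 300, 601, 900, 1201]
human_times = [0, 12, 400, 430, 2000]
```

Real products correlate this with other signals (scanning, destination reputation), and bots that deliberately randomize their check-in interval or, as the article notes, go quiet for months will defeat timing analysis alone.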
In many ways, bots are virtual machines that are designed to launch attack applications in a protected environment. From a security standpoint, bots are like most malware in that they modify the PC configuration, use the network in inappropriate ways, and propagate to other vulnerable machines. Service providers are not going to help solve the problem, and an anti-spam-style service approach doesn't seem to be on the horizon. That means the only alternatives are to bolster endpoint security software for consumers and add more intelligence to the network for enterprises. Right now, the bots are clearly winning.
FIX (financial trading) protocol flawed and hackable
You'd think electronic financial trading would be extra secure, but not so much: One of the most popular application-layer protocols in the financial industry leaves these money applications wide open to attack, according to researchers.
The application-layer FIX (Financial Information eXchange) protocol is used by financial services firms, stock exchanges, and investment banks for automated financial trading. But apps written to the protocol can be vulnerable to denial-of-service, session hijacking, and man-in-the-middle attacks over the Internet -- and can even let an attacker "watch" transactions as they occur, says David Goldsmith, CEO of Matasano Security, who will present the firm's new research on FIX at the upcoming Black Hat USA briefings later this month.
Goldsmith says he can't divulge details on the specific vulnerabilities Matasano found in applications deploying FIX, as well as other financial industry-specific protocols, but the bottom line is that these protocols weren't built with security in mind. "For the most part, when you look under the hood of these protocols, we find almost no means of security," he says. The FIX spec, for instance, barely touches on how to secure data as it travels over the Internet.
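For context on what this protocol looks like on the wire: FIX messages are SOH-delimited tag=value pairs with a trailing checksum, and nothing in the base framing authenticates the peer or encrypts the payload. A minimal sketch of assembling a FIX 4.2-style Logon message -- the comp IDs and field values here are hypothetical, and this is an illustration of the message format, not a trading engine:

```python
SOH = "\x01"  # FIX field delimiter

def fix_message(fields):
    """Assemble a FIX 4.2-style message: BodyLength (tag 9) counts the
    bytes after its own field; CheckSum (tag 10) is the byte sum of
    everything before it, mod 256, zero-padded to three digits."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    checksum = sum((head + body).encode()) % 256
    return f"{head}{body}10={checksum:03d}{SOH}"

# A hypothetical Logon (MsgType 35=A) between two made-up comp IDs.
# The checksum detects transmission errors only -- it offers no
# integrity protection against an attacker who rewrites the message.
logon = fix_message([("35", "A"), ("49", "BUYSIDE"), ("56", "EXCHANGE"),
                     ("34", "1"), ("98", "0"), ("108", "30")])
```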
And most apps that use FIX are written in C and C++, he notes, "which is not always super well-audited code."
FIX has no session-layer encryption built into it, so it isn't easy to encrypt the sessions. "So most people encrypt using external devices like VPNs or tools like 'stunnel,'" Goldsmith says. Although the FIX protocol was updated in the past year to free FIX apps from that session layer, he says, most of these apps are still running the FIX session layer.
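A sketch of the external-encryption workaround Goldsmith describes: an stunnel configuration that accepts the FIX engine's plaintext session locally and carries it to the counterparty over TLS. The host names, ports, and file paths below are placeholders for illustration, not a recommended production setup:

```ini
; hypothetical stunnel config: wrap a plaintext FIX session in TLS
[fix-client]
client = yes
; the local FIX engine connects here in the clear
accept = 127.0.0.1:9878
; stunnel forwards the session over TLS to the counterparty
connect = fix.counterparty.example:9879
; verify the peer's certificate chain against a local CA bundle
verifyChain = yes
CAfile = /etc/stunnel/ca-certs.pem
```

The tradeoff is exactly the one the article implies: the FIX application itself remains unaware of the tunnel, so it still performs no authentication of its own.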
And many FIX-enabled financial apps don't even use passwords for their sessions, mainly because the apps were originally built for internal use over private connections between business partners -- rather than over the Internet, which is increasingly becoming the preferred transport.
Plenty of financial firms may be at risk of these types of attacks. According to the FIX Protocol Website, 75 percent of buy-side and 80 percent of sell-side financial services firms use FIX for electronic trading, with both types of organizations planning to expand FIX, according to a survey taken by TowerGroup. The site says more than three fourths of all financial exchanges surveyed support FIX in their applications, and that most major stock exchanges and investment banks use FIX for their electronic trading. Mutual fund, money manager, and small investment firms also deploy FIX.
Unlike credit-card theft, which ultimately can be stopped before causing much financial damage, an attack on FIX could be silent and deadly: "If a hacker was monitoring or viewing [the transactions], you may never know they are there," Goldsmith says. "[He] could take that information and use it to their advantage for insider trading... or to cause significant financial damage."
So what should financial institutions do in the meantime to protect themselves from attackers who home in on financial protocols? Goldsmith says to start by taking a look at applications "you haven't looked at in a while... When was the last time you changed passwords on applications built on FIX?"
"Even doing basic due-diligence goes a long way," he says. "It's very easy to treat these as internal apps and to not consider all the security ramifications. But these apps need to be treated very seriously."
And security tools really don't help here, Goldsmith says, although strong firewalling and external session-layer encryption are helpful. "You're not going to find that the IDSes of today are supporting FIX, or vulnerability scanners are finding FIX vulns," he says. "It's a little more narrow market."
Plus it's tough for researchers to even gain access to FIX-based systems to study their weaknesses since you can't just take down a financial trading app to test it, for instance. "These systems cannot be [taken] offline," he says.
FIX is just the tip of the iceberg for financial protocols at risk, however. "We'll be tightly focused on FIX" in the Black Hat talk, entitled "Hacking Capitalism," Goldsmith says. "But there's more to talk about."
Eight ways to beat a security audit
You might have your access control process fixed, but you probably haven't adequately trained your administrators on how to manage it. You might have your configuration and change control systems in place, but you probably haven't sufficiently documented the process for using them. If you've adopted strict security policies, your users likely have found a way of avoiding or bypassing them altogether.
Make no mistake -- auditors will find fault with your systems, your processes, and the people who operate them. They're auditors. It's their job.
If you only knew the most common reasons for audit failure in advance, so that you could double-check your environment and fix those potential deal-busters before the auditor comes in. If you only had some tips from experts who have "been there" on how to shore up your environment to beat an audit.
Hey, wait a minute, that's what's in this article!
The following are eight tips offered by auditors, consultants, and others who have been through the IT security audit mill on what to look for in a compliance audit and how to beat those problems before an auditor fails you on them. It's not a comprehensive "cheat sheet," but it might give you some ideas on why companies fail their audits, and what you can do to avoid the same pitfalls.
If you have any ideas or tips that we've overlooked here, please post them to the message board attached to this article. We'd love to hear about your experiences with compliance audits -- and what you'd do differently if you had them to do all over again.
Contents:
* Establish a consistent set of practices for change management
* Keep your app developers away from production/operations
* Give users access only to the data and apps they need
* Shore up physical access to your systems
* Establish methods to detect security anomalies -- and where they come from
* Map your security processes to real business processes
* Double (and triple) check your accounting processes
* Document your work and train your users on what you've done
Eight Sure-Fire Ways to Beat a Security Audit
1. Establish a consistent set of practices for change management
JULY 9, 2007 | There's no such thing as a static IT environment. If you're not properly and consistently keeping track of changes in your organization, you've got a big fat problem. And the lack of a formal change management process could earn you a big fat "F" on your audit report.
Security audit experts say you need a formal process for documenting and approving changes, as well as oversight of changes -- Joe in accounting is now working from home instead of the office -- and regular reviews of your change logs. And you'd better know about that user who was recently fired, so you can immediately disable his account in case he has revenge in mind.
"Three years ago, companies had really poor change management, but today, their change management process [is] moving toward automation," says Paul Proctor, a research vice president at Gartner. "But there's still a hefty number of them that don't have any change management" at all, he says.
Auditors are typically tough on change management. IBM recommends documenting change-management policies and procedures and updating them regularly; reviewing, analyzing, and approving change requests; and testing changes before you make them, according to Robin Hogan, program manager for IBM governance and risk management.
But monitoring change isn't as easy as it sounds: "I've seen tables full of minutes from change-board meetings, forms completed appropriately -- but no evidence that the actual change itself was appropriately implemented, or even implemented at all," Hogan says. The key is an automated change management system that tracks what changes were made and by whom, then matches them to specific systems, she says.
Proctor says change management is more of a process-control issue than a technological one. Gartner recommends "change reconciliation," where you use tools like Tripwire and database monitoring to automatically detect any changes to data or files -- and then cross-check them with authorized changes.
"If you then go back to the CMDB [configuration management database] and reconcile things you detected with authorized change requests," that's change reconciliation, Proctor says. "This is to address auditor concerns to prove that nothing happened that shouldn't have" to the data.
But organizations have a ways to go on the reconciliation side -- Proctor doesn't expect it to become a regular part of the change management process for another four or five years. "The problem is you have to have tightly controlled change management, and every time you detect it, you have to go back and reconcile it."
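The reconciliation loop Proctor describes reduces to a set comparison: changes a Tripwire-style monitor detects, minus changes the CMDB authorized, equals changes someone must explain. A toy sketch -- the paths and hash values are made up, and a real system would compute file hashes and pull approvals from the change database:

```python
def detect_changes(baseline, current):
    """Tripwire-style detection: paths whose recorded hash differs
    from the baseline, plus files that appeared or vanished."""
    return {path for path in baseline.keys() | current.keys()
            if baseline.get(path) != current.get(path)}

def reconcile(detected, approved):
    """Change reconciliation: detected changes not covered by an
    authorized change request are the ones to investigate."""
    return detected - approved

baseline = {"/etc/passwd": "aa11", "/app/config.ini": "bb22"}
current = {"/etc/passwd": "aa11", "/app/config.ini": "ff99",
           "/app/backdoor.so": "0666"}
approved = {"/app/config.ini"}  # the one change the CMDB authorized
unexplained = reconcile(detect_changes(baseline, current), approved)
```

Here the edit to config.ini reconciles against an approved ticket, while the new backdoor.so does not -- exactly the "nothing happened that shouldn't have" evidence an auditor asks for.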
2. Keep your app developers away from production/operations
With many large organizations outsourcing their IT operations and software development, a clean separation between your application developers and your operational, production systems is more crucial than ever.
"Application developers should not have access to the production environment," says Kris Lovejoy, IBM's director of strategy for governance and risk management.
By testing code in the operational environment, developers can either slow or disrupt business operations. With so many companies using third parties to develop their custom internal apps, the production environment can be extremely vulnerable, Lovejoy says. If operations and development aren't adequately segregated, auditors will be crawling all over you, she says.
And beware of leaving IT with indiscriminate access to systems and databases: "That includes giving a developer or programmer or database administrator access to a system that is completely unmonitored and uncontrolled to a production environment or production data," Lovejoy says. "This is why we see the market turning to identity and access management systems."
Proctor says this lack of a separation of duties is a major problem in organizations today: Anywhere from 60 to 70 percent of Gartner's clients give their developers access to production code, and about 25 percent of its clients provide all of their administrators access to everything. "This is more the case for smaller businesses, and they are getting hammered in their audits there. Auditors are stepping in and saying 'you need to fix this.' "
"The main reason [this problem] exists is because it got baked into the way companies do business. Their developers are the ones who put the operations code in place, and everybody has access to everything." This is a prime example of where the least-privilege approach to authorization would come in handy, he says.
It's all about separation of duties, says David Smith, senior regulatory compliance analyst for Symantec. A better approach would be to require a manager to approve what each IT person or individual user can do. "[Sarbanes-Oxley] auditors focus heavily on separation of duties."
"Any person who just does one thing on a system... can execute on that one system." But organizations may have to tweak the way their IT resources operate today, he says.
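The manager-approval model Smith describes boils down to two checks at grant time: the approver isn't the requester, and the new role doesn't combine with existing roles into a toxic pair such as develop-plus-deploy. A hypothetical sketch -- the role names and the conflict table are invented for illustration:

```python
# Role pairs no single person may hold at once (invented example).
CONFLICTING = {("developer", "production-deploy")}

def grant_role(user_roles, user, role, approver):
    """Grant a role only if a manager other than the requester signs
    off and the result creates no separation-of-duties conflict."""
    if approver == user:
        raise PermissionError("self-approval violates separation of duties")
    held = user_roles.get(user, set())
    for a, b in CONFLICTING:
        if {role} | held >= {a, b}:
            raise PermissionError(f"{role} conflicts with a held role")
    user_roles.setdefault(user, set()).add(role)

roles = {"alice": {"developer"}}
grant_role(roles, "alice", "code-review", approver="mgr-bob")
```

Under this model, granting alice "production-deploy" would be refused outright -- the kind of enforced boundary between developers and production that SOX auditors look for.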
3. Give users access only to the data and apps they need
Access control is even more crucial when it comes to end users, who definitely don't need access to everything. Getting a handle on this aspect of compliance requires knowing who (and where) your users are, and what privileges they have.
IBM's Lovejoy says a lack of user access control is one of the top reasons companies fail an audit. "This is the inability to provision users effectively and administer their accounts... And take into account any changes in responsibility or to identify and revoke privileges when a user is terminated."
User access control is closely related to change management. One of the first steps in good change management is keeping people out of places where they shouldn't be, Gartner's Proctor says. "Everybody recognizes that it's well-intentioned people who cause most of the downtime and [security] problems... Any type of change in the system is a time when a flaw can be introduced."
Identity and access management is the goal, Proctor says. "Most audit findings today say they either want you to control who has access to what, or be able to report on who has access to what," he says. A first-line manager should sign off on whether a particular user should have access to a particular system, he says.
Manual and homegrown provisioning just doesn't cut it anymore. "Now that you've got all these identity audit requirements, you need to be going with a tool for it," Proctor says. That means deploying a tool that automates -- and tracks -- the process for you, he says.
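One concrete check an automated provisioning tool performs -- and that auditors perform by hand if you don't -- is diffing the accounts that are still enabled against the current HR roster, to catch the terminated-user gap Lovejoy mentions. A minimal sketch with invented account data:

```python
def orphaned_accounts(active_accounts, hr_roster):
    """Accounts still enabled whose owner is no longer on the HR
    roster -- the terminated-user gap auditors check first."""
    return sorted(acct for acct, owner in active_accounts.items()
                  if owner not in hr_roster)

active = {"jsmith": "J. Smith", "mdoe": "M. Doe", "svc-backup": "M. Doe"}
roster = {"J. Smith"}  # M. Doe no longer appears in HR's records
stale = orphaned_accounts(active, roster)
```

Note that the sweep also catches the service account registered to the departed employee -- a class of leftover access that manual reviews routinely miss.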
User access control shouldn't just be a gatekeeper function, either, Symantec's Smith says. If you think your passwords don't need to be as strong internally, think again: "It's easy [for an attacker] to plug in a laptop in the lobby of the building."
4. Shore up physical access to your systems
What, no biometric scanner on your data center door?
Physical access control to sensitive systems and equipment seems like a no-brainer, especially when you're preparing for an audit. But how much control do you need -- and how do you manage it?
Whether you need locks, sign-in sheets, fingerprint scans, or smart cards depends on the number of people who need access, as well as the level of sensitivity of the data. Gartner recommends deploying the minimum physical controls: "If only two people have access to a room, a lock and key work just fine -- just give the two of them the key," Gartner's Proctor says. "You don't need smart cards then."
If there are dozens of IT people who need access to a system, then it's time to look at a sign-in/out log, or smart cards, which also track dates and times of access.
Gartner suggests publishing your physical-access control policies, including who's allowed where. Enforcement is also important -- with pre-defined consequences -- as well as educating employees on these policies.
Some data-sensitive areas may require multi-factor authentication (think retinal scans), proximity cards, and video surveillance. This tightly controlled physical security should be driven by your business requirements, not by fears of an audit, according to Gartner.
The physical security problem is often exacerbated because IT security people and physical security people don't communicate, and may even work at cross-purposes. "There's a lot less synergy there than you'd expect," Proctor says.
5. Establish methods to detect security anomalies -- and where they come from
If you can't monitor it, you can't manage it, the old adage goes, and this is certainly true when it comes to security compliance. One of the first things that auditors will ask your company is how it knows when someone -- either inside or outside the company -- is tampering with sensitive data. You need to be able to not only answer this question, but demonstrate it onscreen.
Until recently, most companies did their monitoring through some combination of real-time systems -- such as intrusion prevention systems (IPS) or security information management (SIM) tools -- and retrospective analysis of log files to show who accessed which files, and when. SIM tools, in particular, have become a popular method of showing auditors security-related events in the enterprise, and what steps have been taken to prevent unauthorized access.
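At its simplest, the kind of alert a SIM tool surfaces is a policy check applied to an access log: was the resource sensitive, and was the user on its authorized list? A stripped-down sketch -- the resource names and the authorization table are hypothetical:

```python
# Which users may touch each sensitive resource (invented table).
AUTHORIZED = {"payroll.db": {"alice", "hr-app"}}

def suspicious_events(access_log):
    """Return (user, resource) pairs where a sensitive resource was
    touched by someone outside its authorized set -- the events a
    SIM console would raise as alerts."""
    return [(user, resource) for user, resource in access_log
            if resource in AUTHORIZED and user not in AUTHORIZED[resource]]

access_log = [("alice", "payroll.db"), ("bob", "payroll.db"),
              ("bob", "readme.txt")]  # readme.txt is not sensitive
```

Production SIM tools do far more -- normalizing events from many sources and correlating them over time -- but this is the core question an auditor wants answered onscreen: who touched the sensitive data, and were they allowed to?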
"Most of the time, organizations already have controls and policies in place," says Indy Chakrabarti, group product manager at Symantec, which makes SIM tools. "What they need is a way to lower the cost of compliance and enforcement, and that's what our tools are designed to do." (See A Multitude of SIMs.)
But a new class of vendors and products is also emerging for "compliance management," an idea which is sometimes expanded to include IT governance, risk, and compliance (GRC) management. These products are designed, in part, to monitor all the pieces of policy management and compliance, and warn enterprises when they are about to fall out of line with regulations or policies.
"We look at this as an opportunity to translate business requirements into IT activities and metrics that can be measured," says IBM's Lovejoy. "Security and compliance are an important part of that, but so are business resilience and service management."
Other vendors prefer to focus primarily on the compliance piece. "In most cases, the CXO is not interested in looking only at security events," says Dean Coza, director of product marketing at ArcSight. "They want to track new compliance problems, and do some baselining on how the organization is performing against policies and controls. A roles-based approach helps the company monitor not just how its systems are doing, but how its people are doing."
Whatever you decide about monitoring tools, you need to be sure that they can demonstrate to the auditor that your organization can track who is accessing sensitive data -- and can alert the troops when unauthorized access is taking place. If you have those systems in place and tested before the auditor arrives, you'll have a leg up when the audit begins.
6. Map your security processes to real business processes
When you're preparing for an audit, it's important to remember that the auditor's job is to find out whether your organization has sprung any security leaks -- and the audit process may differ from organization to organization. A compliance audit is not like a home inspection, where the inspector typically works from a checklist that you can review in advance. A flaw that might be overlooked in one business might cause another business to fail its audit.
As a result, IT and security organizations should resist the temptation to measure their compliance efforts against a pre-written "checklist" of compliance issues that can be crossed off like a grocery list. Many companies that take this approach are disappointed to learn that they've failed -- because while they have met the "letter" of compliance, they haven't considered its "spirit" -- the prevention of leaks that might hurt employees, customers, or investors.
Organizations often "try to come up with [an audit] checklist, versus looking at their business process," says Symantec's Smith. "Organizations need to be able to produce evidence that's useful to the auditor. If you do risk-based control management effectively, you can reduce the audit cycle, understand the questions they are going to ask, and be prepared."
Auditors typically ask a lot of questions about the business -- how it operates, who has access to information, and which data is the most sensitive, experts say. In regulatory environments where IT compliance requirements are vague, such as SOX or GLBA, the auditor's evaluation of your organization's compliance will depend on your ability to prove that you are protecting your most sensitive data during the course of day-to-day business -- not on a cookie-cutter list of compliance requirements.
If your security policy is effective and fits with the ebb and flow of information inside and outside your organization, you've got a good chance at passing your audit, experts say. But if you focus your efforts on the auditor's requirements, rather than business requirements, you may paradoxically find yourself on the wrong end of the auditor's pen.
7. Double (and triple) check your accounting processes
One of the myths about SOX compliance is that it's all about proving the security of the organization's IT systems, experts say. But in fact, it's all about ensuring that a public company's financial data isn't tampered with -- from inside or out.
"What we've seen recently is that nearly half of the compliance deficiencies that companies encounter are on the accounting side, while less than 5 percent are IT systems related," said John Pescatore, vice president and distinguished analyst at Gartner, at the company's security summit in Washington, D.C. last month. If your organization fails its SOX audit, it's more likely to be a flaw in the way accounting is handled than anything to do with IT, he said. (See Security's Sea Change.)
Just a few weeks ago, the Public Company Accounting Oversight Board (PCAOB) -- a private, nonprofit entity that gives guidance to the many auditors who evaluate SOX compliance -- changed its guidelines to reflect more real-world threats around company financials, and softened some of the rules surrounding less-likely methods for tampering with financial data. (See New Rules May Ease SOX Audits.)
"[The PCAOB is] saying, 'let's stop and think about this,' " says Patrick Taylor, CEO of Oversight, which makes software for analyzing the accuracy and security of financial transactions. "Most financial fraud is going to occur in a rush, right at the end of a reporting period, when the company finds out that it's going to have some problems with its numbers," he says. "Those are going to be changes that somebody makes to the general ledger, which are relatively easy to detect.
"Contrast that with, say, backup," Taylor explains. "To commit financial fraud through a backup system, you'd have to gain access to the backup data, and then you'd have to have the knowledge to alter it. Then you'd somehow have to crash the operational systems so that the backup data would be put in place. That's a lot more complex, and a lot less likely, than making simple changes in the general ledger. And the audit process should reflect that."
Under the revised PCAOB guidelines, auditors will have the freedom to focus their attention on the transaction paths that could most likely lead to fraud, instead of auditing every possible transaction path to financial data. That means that most SOX audits will be much more heavily weighted toward accounting systems and practices, and scrutiny of the enterprise-wide IT security platform will likely be reduced, Taylor suggests.
The new rules might lighten the burden on IT, but they won't necessarily lessen the subjective nature of audits for regulations such as SOX and HIPAA, which leave a great deal of room for interpretation, says Chris Davis, manager of compliance knowledge management at Cybertrust.
"We'll get a lot more specificity on the business requirements, but not on the IT requirements," Davis suggests.
8. Document your work and train your users on what you've done
Compliance audits are like swinging a golf club, experts say: If you fail to follow through, you'll end up in the weeds.
Many auditors agree that two of their most common reasons for failing a company's compliance efforts are poor documentation and poor training programs. The best security policies and practices can still fail an audit if there is no clear system for implementing and enforcing them, they say.
"I've failed companies that passed 99 percent of the requirements but didn't do their training or documentation correctly," said Nigel Tranter, a partner at Payment Software Co., a leading Payment Card Industry Data Security Standard (PCI DSS) auditing firm, in an interview last year. (See Retailers Lag on Security Standard.)
Most auditors start their evaluations by reading the documentation of an organization's security efforts, experts say. Poor documentation -- or no documentation on some aspect of the compliance initiative -- is like holding a red cape in front of a bull, even if the technology and practices are working well.
Similarly, if the effort to train administrators and users on compliance is perceived to be weak, the audit worm can turn, according to those familiar with the process.
The key to good documentation and training is to constantly monitor and review them, and keep them updated as compliance-related changes are made in systems and practices, experts say. In a study conducted last year by the IT Policy Compliance Group, the companies rated "best in class" generally were those that checked themselves for compliance every 21 days or less; many of the laggards did a self-audit only once or twice a year.
"What that says is that to be successful in compliance, you've got to find a way to do some automated monitoring," said Jim Hurley, managing director of the IT Policy Compliance Group and a research director at Symantec. "You can't do it all with people."
Make no mistake -- auditors will find fault with your systems, your processes, and the people who operate them. They're auditors. It's their job.
If only you knew the most common reasons for audit failure in advance, so that you could double-check your environment and fix those potential deal-busters before the auditor comes in. If only you had some tips from experts who have "been there" on how to shore up your environment to beat an audit.
Hey, wait a minute, that's what's in this article!
The following are eight tips offered by auditors, consultants, and others who have been through the IT security audit mill on what to look for in a compliance audit and how to beat those problems before an auditor fails you on them. It's not a comprehensive "cheat sheet," but it might give you some ideas on why companies fail their audits, and what you can do to avoid the same pitfalls.
If you have any ideas or tips that we've overlooked here, please post them to the message board attached to this article. We'd love to hear about your experiences with compliance audits -- and what you'd do differently if you had them to do all over again.
Contents:
* Page 2: Establish a consistent set of practices for change management
* Page 3: Keep your app developers away from production/operations
* Page 4: Give users access only to the data and apps they need
* Page 5: Shore up physical access to your systems
* Page 6: Establish methods to detect security anomalies -- and where they come from
* Page 7: Map your security processes to real business processes
* Page 8: Double (and triple) check your accounting processes
* Page 9: Document your work and train your users on what you've done
Next Page: Establish a consistent set of practices for change management
Eight Sure-Fire Ways to Beat a Security Audit
1. Establish a consistent set of practices for change management
JULY 9, 2007 | There's no such thing as a static IT environment. If you're not properly and consistently keeping track of changes in your organization, you've got a big fat problem. And the lack of a formal change management process could earn you a big fat "F" on your audit report.
Security audit experts say you need a formal document and change procedure, as well as oversight on changes -- Joe in accounting is now working from home instead of the office -- and reviews of your change logs. And you'd better know about that user who was recently fired, so you can immediately disable his account in case he has revenge in mind.
"Three years ago, companies had really poor change management, but today, their change management process [is] moving toward automation," says Paul Proctor, a research vice president at Gartner. "But there's still a hefty number of them that don't have any change management" at all, he says.
Auditors are typically tough on change management. IBM recommends documenting change-management policies and procedures and updating them regularly; reviewing, analyzing, and approving change requests; and testing changes before you make them, according to Robin Hogan, program manager for IBM governance and risk management.
But monitoring change isn't as easy as it sounds: "I've seen tables full of minutes from change-board meetings, forms completed appropriately -- but no evidence that the actual change itself was appropriately implemented, or even implemented at all," Hogan says. The key is an automated change management system that tracks what changes were made and by whom, then matches them to specific systems, she says.
Proctor says change management is more of a process-control issue than a technological one. Gartner recommends "change reconciliation," where you use tools like Tripwire and database monitoring to automatically detect any changes to data or files -- and then cross-check them with authorized changes.
"If you then go back to the CMDB [change management database] and reconcile things you detected with authorized change requests," that's change reconciliation, Proctor says. "This is to address auditor concerns to prove that nothing happened that shouldn't have" to the data.
But organizations have a ways to go on the reconciliation side -- Proctor doesn't expect it to become a regular part of the change management process for another four or five years. "The problem is you have to have tightly controlled change management, and every time you detect it, you have to go back and reconcile it."
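The change-reconciliation idea Proctor describes can be sketched in a few lines. The Python below is illustrative only -- the file paths, the shape of the CMDB data, and the `reconcile` helper are hypothetical, not any vendor's API: hash a baseline of monitored files, detect what actually changed, then subtract the changes that have an approved request behind them.

```python
import hashlib

def snapshot(paths):
    """Hash each monitored file so later changes can be detected."""
    state = {}
    for path in paths:
        with open(path, "rb") as f:
            state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def reconcile(baseline, current, authorized):
    """Return detected changes with no matching authorized change request."""
    changed = {p for p in baseline if current.get(p) != baseline[p]}
    return sorted(changed - set(authorized))

# Two files changed, but the CMDB records an approved change for only one.
baseline = {"/etc/app.conf": "aaa", "/etc/db.conf": "bbb"}
current = {"/etc/app.conf": "aaa2", "/etc/db.conf": "bbb2"}
authorized = ["/etc/app.conf"]
print(reconcile(baseline, current, authorized))  # ['/etc/db.conf']
```

Anything this check surfaces is a change that happened without a request behind it -- exactly the evidence Proctor says auditors want to see.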
Next Page: Keep your app developers away from production/operations
Eight Sure-Fire Ways to Beat a Security Audit
2. Keep your app developers away from production/operations
JULY 9, 2007 | With many large organizations outsourcing their IT operations and software development, a clean separation between your application developers and your operational, production systems is more crucial than ever.
"Application developers should not have access to the production environment," says Kris Lovejoy, IBM's director of strategy for governance and risk management.
By testing code in the operational environment, developers can either slow or disrupt business operations. With so many companies using third parties to develop their custom internal apps, the production environment can be extremely vulnerable, Lovejoy says. If operations and development aren't adequately segregated, auditors will be crawling all over you, she says.
And beware of leaving IT with indiscriminate access to systems and databases: "That includes giving a developer or programmer or database administrator access to a system that is completely unmonitored and uncontrolled to a production environment or production data," Lovejoy says. "This is why we see the market turning to identity and access management systems."
Proctor says this lack of a separation of duties is a major problem in organizations today: Anywhere from 60 to 70 percent of Gartner's clients give their developers access to production code, and about 25 percent of its clients provide all of their administrators access to everything. "This is more the case for smaller businesses, and they are getting hammered in their audits there. Auditors are stepping in and saying 'you need to fix this.' "
"The main reason [this problem] exists is because it got baked into the way companies do business. Their developers are the ones who put the operations code in place, and everybody has access to everything." This is a prime example of where the least-privilege approach to authorization would come in handy, he says.
It's all about separation of duties, says David Smith, senior regulatory compliance analyst for Symantec. A better approach would be to require a manager to approve what each IT person or individual user can do. "[Sarbanes-Oxley] auditors focus heavily on separation of duties."
"Any person who just does one thing on a system... can execute on that one system." But organizations may have to tweak the way their IT resources operate today, he says.
Next Page: Give users access only to the data and apps they need
Eight Sure-Fire Ways to Beat a Security Audit
3. Give users access only to the data and apps they need
JULY 9, 2007 | Access control is even more crucial when it comes to end users, who definitely don't need access to everything. Getting a handle on this aspect of compliance requires knowing who (and where) your users are, and what privileges they have.
IBM's Lovejoy says a lack of user access control is one of the top reasons companies fail an audit. "This is the inability to provision users effectively and administer their accounts... And take into account any changes in responsibility or to identify and revoke privileges when a user is terminated."
User access control is closely related to change management. One of the first steps in good change management is keeping people out of places where they shouldn't be, Gartner's Proctor says. "Everybody recognizes that it's well-intentioned people who cause most of the downtime and [security] problems... Any type of change in the system is a time a flaw can be introduced."
Identity and access management are the goal, Proctor says. "Most audit findings today say they either want you to control who has access to what, or be able to report on who has access to what," he says. A first-line manager should sign off on whether a particular user should have access to a particular system, he says.
Manual and homegrown provisioning just doesn't cut it anymore. "Now that you've got all these identity audit requirements, you need to be going with a tool for it," Proctor says. That means deploying a tool that automates -- and tracks -- the process for you, he says.
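As a concrete illustration of the provisioning problem Lovejoy and Proctor describe, an automated check might reconcile active accounts against the HR roster and each role's approved entitlements. The data shapes and names below are invented for illustration, not taken from any identity-management product:

```python
def audit_accounts(accounts, hr_roster, role_entitlements):
    """Flag accounts whose owner has left the company, and privileges
    that exceed what the owner's role allows (least privilege)."""
    findings = []
    for user, info in accounts.items():
        if user not in hr_roster:
            findings.append((user, "terminated user still has an account"))
            continue
        allowed = role_entitlements.get(hr_roster[user], set())
        extra = sorted(set(info["privileges"]) - allowed)
        if extra:
            findings.append((user, "excess privileges: " + ", ".join(extra)))
    return findings

accounts = {
    "alice": {"privileges": {"crm_read"}},
    "bob": {"privileges": {"crm_read", "db_admin"}},  # bob is only an analyst
    "carol": {"privileges": {"crm_read"}},            # carol was let go
}
roster = {"alice": "analyst", "bob": "analyst"}       # carol is no longer in HR
entitlements = {"analyst": {"crm_read"}}
for finding in audit_accounts(accounts, roster, entitlements):
    print(finding)
```

Run on a schedule, a check like this catches both of the failures named above: the fired user whose account was never disabled, and the user who accumulated access beyond what a manager ever signed off on.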
User access control shouldn't just be a gatekeeper function, either, Symantec's Smith says. If you think your passwords don't need to be as strong internally, think again: "It's easy [for an attacker] to plug in a laptop in the lobby of the building."
Next Page: Shore up physical access to your systems
Eight Sure-Fire Ways to Beat a Security Audit
4. Shore up physical access to your systems
JULY 9, 2007 | What, no biometric scanner on your data center door?
Physical access control to sensitive systems and equipment seems like a no-brainer, especially when you're preparing for an audit. But how much control do you need -- and how do you manage it?
Whether you need locks, sign-in sheets, fingerprint scans, or smart cards depends on the number of people who need access, as well as the level of sensitivity of the data. Gartner recommends deploying the minimum physical controls: "If only two people have access to a room, a lock and key work just fine -- just give the two of them the key," Gartner's Proctor says. "You don't need smart cards then."
If there are dozens of IT people who need access to a system, then it's time to look at a sign-in/out log, or smart cards, which also track dates and times of access.
Gartner suggests publishing your physical-access control policies, including who's allowed where. Enforcement is also important -- with pre-defined consequences -- as well as educating employees on these policies.
Some data-sensitive areas may require multi-factor authentication (think retinal scans), proximity cards, and video surveillance. This tightly controlled physical security should be driven by your business requirements, not by fears of an audit, according to Gartner.
The physical security problem is often exacerbated because IT security people and physical security people don't communicate, and may even work at cross-purposes. "There's a lot less synergy there than you'd expect," Proctor says.
Next Page: Establish methods to detect security anomalies -- and where they come from
Eight Sure-Fire Ways to Beat a Security Audit
5. Establish methods to detect security anomalies -- and where they come from
JULY 9, 2007 | If you can't monitor it, you can't manage it, the old adage goes, and this is certainly true when it comes to security compliance. One of the first things that auditors will ask your company is how it knows when someone -- either inside or outside the company -- is tampering with sensitive data. You need to be able to not only answer this question, but demonstrate it onscreen.
Until recently, most companies did their monitoring through some combination of real-time systems -- such as intrusion prevention systems (IPS) or security information management (SIM) tools -- and retrospective analysis of log files to show who accessed which files, and when. SIM tools, in particular, have become a popular method of showing auditors security-related events in the enterprise, and what steps have been taken to prevent unauthorized access.
"Most of the time, organizations already have controls and policies in place," says Indy Chakrabarti, group product manager at Symantec, which makes SIM tools. "What they need is a way to lower the cost of compliance and enforcement, and that's what our tools are designed to do." (See A Multitude of SIMs.)
But a new class of vendors and products is also emerging for "compliance management," an idea which is sometimes expanded to include IT governance, risk, and compliance (GRC) management. These products are designed, in part, to monitor all the pieces of policy management and compliance, and warn enterprises when they are about to fall out of line with regulations or policies.
"We look at this as an opportunity to translate business requirements into IT activities and metrics that can be measured," says IBM's Lovejoy. "Security and compliance are an important part of that, but so are business resilience and service management."
Other vendors prefer to focus primarily on the compliance piece. "In most cases, the CXO is not interested in looking only at security events," says Dean Coza, director of product marketing at ArcSight. "They want to track new compliance problems, and do some baselining on how the organization is performing against policies and controls. A roles-based approach helps the company monitor not just how its systems are doing, but how its people are doing."
Whatever you decide about monitoring tools, you need to be sure that they can demonstrate to the auditor that your organization can track who is accessing sensitive data -- and can alert the troops when unauthorized access is taking place. If you have those systems in place and tested before the auditor arrives, you'll be a leg up when the audit begins.
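To make the monitoring requirement concrete, here is a toy sketch of the kind of check a SIM tool automates: compare an access log against an access-control list, flag unauthorized reads, and escalate users who trip the check repeatedly. The ACL, log format, and repeat-offender threshold are all invented for illustration:

```python
from collections import Counter

def scan(log_entries, acl):
    """Return (unauthorized accesses, users with repeated violations)."""
    violations = [(user, resource) for user, resource in log_entries
                  if user not in acl.get(resource, set())]
    counts = Counter(user for user, _ in violations)
    repeat_offenders = {user for user, n in counts.items() if n > 1}
    return violations, repeat_offenders

acl = {"payroll.db": {"alice"}, "ledger.db": {"alice", "bob"}}
log = [("alice", "payroll.db"), ("bob", "payroll.db"),
       ("bob", "ledger.db"), ("mallory", "payroll.db"),
       ("mallory", "ledger.db")]
violations, offenders = scan(log, acl)
print(violations)  # every access the ACL does not cover
print(offenders)   # {'mallory'}
```

A real deployment would feed this from live log streams and page someone on the repeat-offender set; the point for an audit is being able to show both the detection and the alerting on screen.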
Next Page: Map your security processes to real business processes
Eight Sure-Fire Ways to Beat a Security Audit
6. Map your security processes to real business processes
JULY 9, 2007 | When you're preparing for an audit, it's important to remember that the auditor's job is to find out whether your organization has sprung any security leaks -- and the audit process may differ from organization to organization. A compliance audit is not like a home inspection, where the inspector typically works from a checklist that you can review in advance. A flaw that might be overlooked in one business might cause another business to fail its audit.
As a result, IT and security organizations should resist the temptation to measure their compliance efforts against a pre-written "checklist" of compliance issues that can be crossed off like a grocery list. Many companies that take this approach are disappointed to learn that they've failed -- because while they have met the "letter" of compliance, they haven't considered its "spirit" -- the prevention of leaks that might hurt employees, customers, or investors.
Organizations often "try to come up with [an audit] checklist, versus looking at their business process," says Symantec's Smith. "Organizations need to be able to produce evidence that's useful to the auditor. If you do risk-based control management effectively, you can reduce the audit cycle, understand the questions they are going to ask, and be prepared."
Auditors typically ask a lot of questions about the business -- how it operates, who has access to information, and which data is the most sensitive, experts say. In regulatory environments where IT compliance requirements are vague, such as SOX or GLBA, the auditor's evaluation of your organization's compliance will depend on your ability to prove that you are protecting your most sensitive data during the course of day-to-day business -- not on a cookie-cutter list of compliance requirements.
If your security policy is effective and fits with the ebb and flow of information inside and outside your organization, you've got a good chance at passing your audit, experts say. But if you focus your efforts on the auditor's requirements, rather than business requirements, you may paradoxically find yourself on the wrong end of the auditor's pen.
Next Page: Double (and triple) check your accounting processes
Eight Sure-Fire Ways to Beat a Security Audit
7. Double (and triple) check your accounting processes
JULY 9, 2007 | One of the myths about SOX compliance is that it's all about proving the security of the organization's IT systems, experts say. But in fact, it's all about ensuring that a public company's financial data isn't tampered with -- from inside or out.
"What we've seen recently is that nearly half of the compliance deficiencies that companies encounter are on the accounting side, while less than 5 percent are IT systems related," said John Pescatore, vice president and distinguished analyst at Gartner, at the company's security summit in Washington, D.C. last month. If your organization fails its SOX audit, it's more likely to be a flaw in the way accounting is handled than anything to do with IT, he said. (See Security's Sea Change.)
Just a few weeks ago, the Public Company Accounting Oversight Board (PCAOB) -- a private, nonprofit entity that gives guidance to the many auditors who evaluate SOX compliance -- changed its guidelines to reflect more real-world threats around company financials, and softened some of the rules surrounding less-likely methods for tampering with financial data. (See New Rules May Ease SOX Audits.)
"[The PCAOB is] saying, 'let's stop and think about this,' " says Patrick Taylor, CEO of Oversight, which makes software for analyzing the accuracy and security of financial transactions. "Most financial fraud is going to occur in a rush, right at the end of a reporting period, when the company finds out that it's going to have some problems with its numbers," he says. "Those are going to be changes that somebody makes to the general ledger, which are relatively easy to detect.
"Contrast that with, say, backup," Taylor explains. "To commit financial fraud through a backup system, you'd have to gain access to the backup data, and then you'd have to have the knowledge to alter it. Then you'd somehow have to crash the operational systems so that the backup data would be put in place. That's a lot more complex, and a lot less likely, than making simple changes in the general ledger. And the audit process should reflect that."
Under the revised PCAOB guidelines, auditors will have the freedom to focus their attention on the transaction paths that could most likely lead to fraud, instead of auditing every possible transaction path to financial data. That means that most SOX audits will be much more heavily weighted toward accounting systems and practices, and scrutiny of the enterprise-wide IT security platform will likely be reduced, Taylor suggests.
The new rules might lighten the burden on IT, but they won't necessarily lessen the subjective nature of audits for regulations such as SOX and HIPAA, which leave a great deal of room for interpretation, says Chris Davis, manager of compliance knowledge management at Cybertrust.
"We'll get a lot more specificity on the business requirements, but not on the IT requirements," Davis suggests.
Next Page: Document your work and train your users on what you've done
Eight Sure-Fire Ways to Beat a Security Audit
8. Document your work and train your users on what you've done
JULY 9, 2007 | Compliance audits are like swinging a golf club, experts say: If you fail to follow through, you'll end up in the weeds.
Many auditors agree that two of their most common reasons for failing a company's compliance efforts are poor documentation and poor training programs. The best security policies and practices can still fail an audit if there is no clear system for implementing and enforcing them, they say.
"I've failed companies that passed 99 percent of the requirements but didn't do their training or documentation correctly," said Nigel Tranter, a partner at Payment Software Co., a leading Payment Card Industry Data Security Standard (PCI DSS) auditing firm, in an interview last year. (See Retailers Lag on Security Standard.)
Most auditors start their evaluations by reading the documentation of an organization's security efforts, experts say. Poor documentation -- or no documentation on some aspect of the compliance initiative -- is like holding a red cape in front of a bull, even if the technology and practices are working well.
Similarly, if the effort to train administrators and users on compliance is perceived to be weak, the audit worm can turn, according to those familiar with the process.
The key to good documentation and training is to monitor and review them constantly, and to keep them updated as compliance-related changes are made to systems and practices, experts say. In a study conducted last year by the IT Policy Compliance Group, the companies rated "best in class" were generally those that checked themselves for compliance every 21 days or less; many of the laggards did a self-audit only once or twice a year.
"What that says is that to be successful in compliance, you've got to find a way to do some automated monitoring," said Jim Hurley, managing director of the IT Policy Compliance Group and a research director at Symantec. "You can't do it all with people."
Harry Potter worm reports his death!!
Harry Potter fans investigating a Harry Potter death rumour run the risk of activating a computer worm, Trend Micro has warned.
Malware authors are exploiting the current anticipation for the next Harry Potter book to trick fans into infecting their machines with a computer worm identified as WORM_HAIRY.A, the security vendor has found.
The worm arrives via removable drives and spreads by dropping copies of itself in all physical, removable and floppy drives. It also drops an AUTORUN.INF file to automatically execute dropped copies when the drives are accessed.
When executed, it drops several files, including a Microsoft Word document with the message 'Harry Potter is dead', as well as displaying a message at each system startup titled 'read and repent'.
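The infection route described above -- a dropped copy plus an AUTORUN.INF that launches it when the drive is accessed -- can be checked for defensively. The sketch below is a simple heuristic, not Trend Micro's detection logic; it only inspects the `open=` entry, while real malware also abuses other autorun keys:

```python
import configparser
import os

def suspicious_autorun(drive_root):
    """Flag an autorun.inf whose open= entry launches a file
    that was dropped onto the same drive."""
    inf_path = os.path.join(drive_root, "autorun.inf")
    if not os.path.exists(inf_path):
        return None
    cfg = configparser.ConfigParser()
    try:
        cfg.read(inf_path)
    except configparser.Error:
        return None  # malformed files are worth a look too, but keep the sketch simple
    target = cfg.get("autorun", "open", fallback=None)
    # A launch target that actually exists on the drive is the red flag.
    if target and os.path.exists(os.path.join(drive_root, target)):
        return target
    return None
```

Running this over each removable drive's root and quarantining whatever it flags mirrors, in miniature, the behavior the vendor's write-up describes.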
This is not the first time that the popularity of Harry Potter has been taken advantage of by authors of malware. A few weeks ago an email promising free tickets to the premiere of the upcoming Harry Potter and the Order of the Phoenix film was circulated which carried a Trojan.
"Once again these malware authors exploit highly-anticipated events in order to spread their malicious codes to the public," Samir Kirouani, senior technical engineer, Trend Micro Middle East and Africa said.
Potter fans around the world are anxiously waiting for the latest book, the last in the legendary series, to hit shelves on July 21.
Author JK Rowling has said that two characters will die in the novel, leading to speculation that one may be the protagonist.
Investigate such rumours with caution, Trend Micro advised fans.
"Harry Potter fans could just treat the news of his death as an ugly rumour, but they should definitely exercise caution and practice safe surfing when investigating such claims online," Kirouani added.
Paying for hacking tools
WabiSabiLabi Ltd, a Swiss company, has rolled out an interesting website that allows users to buy security vulnerabilities in unpatched software. Although it might sound like hot goodies for hackers, the owners maintain that the flaws can also be bought by security companies, or even by the vendors of the affected software, in order to fix the programs. At this time there are only four vulnerabilities for sale, with prices between 500 and 2,000 euros. There are only two bids so far: one for a Linux kernel memory leak and one for an "unpatched SQL Injection vulnerability in MKPortal."
This site might represent a dangerous source of vulnerabilities, especially for hackers, who would be able to attack a given computer more easily than before. For example, the site sells a Yahoo Messenger 8.1 security flaw for 2,000 euros, enabling attackers to compromise an affected system without spending time hunting for vulnerabilities themselves.
"Nobody in the pharmaceutical industry is blackmailing researchers (or the companies that are financing the research), to force them to release the results for free under an ethical disclosure policy," the WabiSabiLabi Web site mentions according to PC World.
However, the website raises concerns among security companies, which fear the exploitation of unpatched flaws. "It's going to be eBay for vulnerabilities. We're looking at the potential of cyber warfare coming up. Now we're going to peddle vulnerabilities in a winner-takes-all auction. How do we know who's good and who's bad when we do this?" David Perry of Trend Micro Inc. told the same source.
In the past, iDefense Labs also paid for new vulnerabilities, but that was a contest meant to bring unpatched security flaws into the spotlight. The prizes were quite attractive, and numerous security experts joined the competition for the awards iDefense Labs offered.
Managed services market tipped for growth
The market for managed services is set to double between 2006 and 2010 to reach US$12.1 billion, according to Infonetics Research.
Growth will be driven by increasing security threats and the growing complexity of security solutions, the firm said.
Organisations of all sizes will look more to managed security services as security threats grow in number, security solutions become more complex and demand more management efforts, and as the service providers themselves add value to improve revenues and margins, according to a new report from the firm.
While there will not be a major spike in managed security service spending, strong incremental growth will continue beyond 2010, researchers predicted. Around 49% of security service revenue in 2006 came from managed firewall services, compared to 27% from content security and 24% from other security services.
The managed encrypted virtual private network (VPN) service market, on the other hand, is expected to decline in coming years after inching up 4% between 2005 and 2006 to $20.5 billion.
"Multi-protocol Label Switching (MPLS) services are really starting to steal business away from encrypted VPNs," said Jeff Wilson, principal analyst for network security at Infonetics Research. "This is having a significant impact on spending for managed IPSec site-to-site VPNs, especially among large organizations who are starting to migrate from complex self-managed IPSec VPNs to simpler carrier-managed MPLS services."
In 2006, 97% of VPN service revenue came from IPSec VPNs and only 3% from SSL VPNs, according to the report.
Google buys Postini
Google has stepped up its efforts to take on Microsoft Office with the $625m (£310m) purchase of web-based security provider Postini.
The search engine giant said the deal would allow it to provide more companies with web-based services similar to its Google Apps package.
Postini sells encryption and archiving software to more than 35,000 businesses and 10m users across the globe.
The deal is expected to be completed by the end of the third quarter.
'Wider appeal'
"With this transaction, we're reinforcing our commitment to delivering compelling hosted applications to businesses of all sizes," said Google chairman and chief executive Eric Schmidt.
"With the addition of Postini, our apps are not just simple and appealing to users - they can also streamline the complex information security mandates within these organisations."
The acquisition is the third biggest announced by Google after it snapped up DoubleClick for $3.1bn in April and its $1.65bn takeover of YouTube.
Google has been moving closer to directly taking on Microsoft's Office package of applications with the launch of a number of popular business web services including email, calendars, spreadsheets and word processing.
So far its Google Apps service has been adopted by 100,000 businesses, the firm said.
Fraudsters test credit cards with charity donations
http://www.itpro.co.uk/news/119319/fraudsters-test-credit-cards-with-charity-donations.html
Cyber criminals are using stolen credit cards to donate to charity, according to research by an anti-virus company. Yazan Gable, a researcher with Symantec's Security Response Team, said that far from hackers suddenly becoming modern-day Robin Hoods, the scammers are sending their victims' money to charities in order to verify that the card numbers are valid. Gable came across this activity when monitoring IRC channels set up on the internet by scammers to discuss stolen credit cards.
Wednesday, June 27, 2007
Construction kits unleash variants
Multiple hacker groups are using a "construction kit" supplied by the author of a Trojan horse program discovered last October to develop and unleash more dangerous variants of the original malware.
Already such variants have stolen sensitive information belonging to at least 10,000 individuals and sent the data to rogue servers in China, Russia and the U.S., according to Don Jackson, a security researcher at SecureWorks Inc. in Atlanta. The stolen data includes Social Security numbers, online account information, bank account and credit card numbers, usernames and passwords, and other data that users would usually input during an SSL session.
The Prg Trojan, as it has been dubbed by SecureWorks, is a variant of another Trojan called wnspoem that was unearthed in October. Similar to wnspoem, the Prg Trojan and its variants are designed to sniff sensitive data from Windows internal memory buffers before the data is encrypted and sent to SSL-protected Web sites. The Trojans are programmed to send the stolen data to multiple servers around the world where it is stored in encrypted fashion and sold to others looking for such information. An analysis of log files on the servers storing the stolen data shows that a lot of the information is coming from corporate PCs, Jackson said.
The variants include a new function that allows them to listen on TCP port 6081 and wait for a remote attacker to connect and issue commands for forwarding data or for rummaging through files on the compromised system, Jackson said. The newer variants are also more configurable and can be programmed to send stolen data to its final destination via a chain of proxy servers. The new Prg variants encrypt stolen data differently from the original version, making older analysis tools obsolete, he said.
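A defender could hunt for this backdoor behavior with a simple TCP probe of the reported port. Below is a minimal sketch in Python; the subnet addresses are illustrative placeholders, and a real investigation would use proper scanning and triage tooling rather than an ad-hoc script:

```python
import socket

def is_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Attempt a TCP connection; True if something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Sweep a few internal hosts (hypothetical TEST-NET addresses) for the
# port the Prg variants reportedly listen on.
SUSPECT_PORT = 6081
for host in ["192.0.2.10", "192.0.2.11"]:
    if is_port_open(host, SUSPECT_PORT):
        print(f"{host} is listening on {SUSPECT_PORT} -- investigate")
```

A hit on the port is only a lead, not proof of infection; plenty of legitimate software binds high ports, so any match would need follow-up on the host itself.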
What makes the threat from the Prg Trojan especially potent is the availability of a construction tool kit that allows hackers to develop and release new versions of the code faster than antivirus vendors can devise applications, Jackson said. The tool kit allows hackers to recompile and pack the malicious code in countless subtly different ways so as to evade detection by antivirus engines typically looking for specific signatures to identify and block threats, Jackson said.
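The reason repacking defeats signature matching is easy to demonstrate. Modeling a byte-exact signature with SHA-256 over a stand-in byte string (not real malware), a one-byte change is enough to break the match:

```python
import hashlib

KNOWN_BAD = b"stand-in bytes for a known trojan sample"  # placeholder, not malware
signature = hashlib.sha256(KNOWN_BAD).hexdigest()         # what a blocklist would store

repacked = KNOWN_BAD + b"\x90"  # attacker appends a single padding byte

def matches_signature(sample: bytes) -> bool:
    """True only if the sample is byte-identical to the known-bad file."""
    return hashlib.sha256(sample).hexdigest() == signature

print(matches_signature(KNOWN_BAD))  # True  -- the original sample is caught
print(matches_signature(repacked))   # False -- the trivially repacked variant slips past
```

Real antivirus engines use more flexible signatures than a whole-file hash, but the construction kit's countless repack permutations exploit the same basic weakness: any detection keyed to specific bytes can be dodged by changing those bytes.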
The tool kit appears to have been developed by the Russian authors of the original wnspoem Trojan and comes complete with a three-page instruction manual in Russian instructing buyers how to use it. Originally, the kit appears to have been sold to other hacker groups for around $1,000. But more recently it appears to have been posted on an underground site, where others have been downloading and using it, Jackson said.
"The hackers are literally infecting thousands of users with one particular variant and once that version of the Trojan is blocked by antivirus, the hackers simply launch a new one in its place," Jackson said.
One of the groups using the construction kit has been naming its attacks after makes of cars, including Ford, Bugatti and Mercedes, according to a SecureWorks description of the Trojan. The group has been spreading versions of the Trojan by taking advantage of vulnerabilities in the ADODB database wrapper library and other components of Windows and Internet Explorer, according to SecureWorks. That group alone may have snared data from more than 8,000 victims. Data stolen by this group's Trojans is sent to servers based in the U.S. and China, according to SecureWorks.
Another group using the tool kit has been naming its attacks using the letter "H" and has sent its variants via spam e-mails to various individuals, SecureWorks said. One recent attack involved an e-mail with a subject line reading "HAPPY FATHER'S DAY." Data stolen by this group's Trojans is being sent back to servers in Russia. According to Jackson, many of those servers have separate staging areas on them with multiple versions of Prg Trojan programs that can be released as older versions get detected by antivirus software.
End Point Security, how far should you go?
Every time a new security concern emerges within the IT industry, security vendors present products that claim to address the problem.
Once the main commodity security products had established themselves, along came VPNs, then SSL VPNs, and following some major scandals, regulatory compliance. Most recently, vendors have begun offering products to address customers’ needs for a Network Access Control (NAC) solution - but does this satisfy the need for comprehensive endpoint security?
Endpoint security should cover all aspects of activity at the endpoint and address both hidden potential threats and actual weaknesses that could result in a security breach. Many vendors offer products that resolve specific security issues related to the endpoint, and describe these as ‘endpoint security’ solutions. However, this is misleading for customers: for example, vendors offering products that control the use of memory sticks, digital cameras or any other type of USB memory device are not offering endpoint security, they are offering device control.
If this is claimed to prevent classified information leaving the organisation, customers are further misled, because copying to a device is not the only way to leak information. The same applies to vendors offering application control products: applications are just one category of security threat that may occur on an endpoint; even in networks that lock down installations so that only approved applications may be installed, the endpoint remains open to other security breaches.
Combining commodity security products such as personal firewalls, anti-virus and behavioural IDS/IPS does not constitute an endpoint security solution. These products should be obligatory for any security-savvy organisation wanting to keep its network safe. The layer of endpoint security needs to cover other less-monitored activities like processes, services and their configurations and start-up commands that boot with the OS, as well as the obvious application and device control.
Add to the mix some form of change-control that can identify a bypassed proxy or disabled group policy, plus functionality that includes detecting multiple network connections from a single PC or using a wireless connection while connected to a LAN, and one is closer to a full view of an endpoint’s activity throughout its connection to the network. A comprehensive solution must also have remediation capabilities to minimise the impact on administrators managing the company endpoints. A product that identifies problems but does not offer remediation cannot be considered a complete solution.
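The change-control idea described above reduces to a baseline-and-compare loop: fingerprint each approved configuration item, then periodically re-fingerprint and flag anything that drifted. A minimal sketch, assuming the monitored items (proxy settings, group policy state) can be serialized to bytes; all names here are hypothetical:

```python
import hashlib

def fingerprint(blob: bytes) -> str:
    """Hash one serialized configuration item (e.g. a proxy-settings dump)."""
    return hashlib.sha256(blob).hexdigest()

def snapshot(items: dict) -> dict:
    """Baseline: map each config item name to its fingerprint."""
    return {name: fingerprint(blob) for name, blob in items.items()}

def drifted(baseline: dict, current: dict) -> list:
    """Names whose current state no longer matches the approved baseline."""
    return [name for name, digest in baseline.items()
            if fingerprint(current.get(name, b"")) != digest]

# Example: a disabled group policy shows up as drift
approved = snapshot({"proxy": b"proxy=gw.corp:8080", "gpo": b"enforced=1"})
now      = {"proxy": b"proxy=gw.corp:8080", "gpo": b"enforced=0"}
print(drifted(approved, now))  # ['gpo']
```

The hard part in a real product is not the comparison but the collection and remediation: reliably serializing each setting on every endpoint, and restoring the approved state rather than merely reporting the mismatch.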
An endpoint security solution must address all aspects of misuse, misconfiguration and malicious activity. Most NAC products promise to quarantine endpoints that do not conform to company policy (without necessarily offering any immediate remediation). They also require that each endpoint exhibit a specific set of security requirements and show a clean bill of health, free of malware infections, before being allowed to join the network.
The problem is that quite apart from the fact that the checks offered are not sufficient to provide a complete picture of the endpoint's security status, they are almost always performed only when the endpoint joins the network. So, while NAC has its benefits and provides a valuable barrier that keeps infected endpoints from joining an otherwise clean network, it is only a small part of endpoint security, especially for endpoints fixed inside the network that may not log off at the end of a day. Unless a NAC solution offers complete endpoint security functionality on a continuous basis, it must be seen as a separate product that merely complements endpoint security.
A company sourcing an endpoint security solution usually does so either because it has already experienced a breach from within its network, or it perceives that a problem exists in controlling endpoint usage, which needs to be addressed before it becomes insurmountable. Before identifying a vendor, the company should identify known weaknesses in the security framework of its internal network. It should also try to define whether the main source of the problem is LAN-based endpoints or those that connect externally. Is it the users and what they bring into the network, such as portable memory devices, music players or software?
It could be lack of awareness or experience, evidenced by inadvertently disabling or removing critical applications, downloading from potentially harmful websites, or wasting resources on bandwidth-sapping applications. There is a plethora of nuisances as well as threats that can compromise a network, some of them hidden. It is essential to source a solution that can identify both obvious and hidden threats efficiently and easily, and provide a mechanism to remedy the problems found.
Los Alamos breached again!!
What's going on at Los Alamos? The nation's premier nuclear-weapons laboratory appears plagued with continuing security problems. Barely 10 days after revelations of a leak of highly classified material over the Internet, NEWSWEEK has learned of two other security breaches.
In late May, a Los Alamos staffer took his lab laptop with him on vacation to Ireland. A senior nuclear official familiar with the inner workings of Los Alamos—who would not be named talking about internal matters—says the laptop's hard drive contained "government documents of a sensitive nature." The laptop was also fitted with an encryption card advanced enough that its export is government-controlled. In Ireland, the laptop was stolen from the vacationer's hotel room. It has not been recovered. This source adds that Los Alamos has started a frantic effort to inventory all its laptops, calling in most of them and substituting nonportable desktop models. (The source’s account was confirmed by a midlevel Los Alamos official who also requests anonymity owing to the sensitivity of the subject.)
Then, 10 days ago, a Los Alamos scientist fired off an e-mail to colleagues at the Nevada nuclear test site. The scientist works in Los Alamos's P Division, which does experimental physics related to weapons design, a lab source says. The material he e-mailed was "highly classified," the same source says. But he sent his e-mail over the open Internet, rather than through the secure defense network.
These incidents come as Los Alamos is still reeling from the revelation that, in January, half a dozen board members of the company that manages the lab circulated—over the Internet—an e-mail to each other containing the most highly classified information about the composition of America's nuclear arsenal. The two sources tell NEWSWEEK that the e-mail concerned what the weapons community calls "special nuclear materials," the other ingredients besides uranium or plutonium at the core of nuclear weapons. The sources confirm to NEWSWEEK that the breach was rated "category one," meaning it posed "the most serious threats to national security interests."
Los Alamos spokesman Jeff Berger referred questions about the January breach to the Department of Energy or its specialist agency, the National Nuclear Security Administration. Regarding the e-mail to the Nevada test site, Berger said: "The purported incident is under investigation; it would be inappropriate to comment." As for the laptop stolen in Ireland, Berger confirmed the event, but said "information contained on the computer was of sufficiently low sensitivity that, had the employee followed proper laboratory procedure, he would have been authorized to take it to Ireland." About the encryption card, Berger said: "Ireland is a country that wouldn't have posed any export problems." He confirmed that, in the wake of this incident, Los Alamos is "in the process of narrowly restricting the use of laptops for foreign travel," while also working "to strengthen our employees' awareness of their responsibilities for protecting government equipment and the proper laboratory procedures for off-site usage."
Bryan Wilkes, spokesman for the National Nuclear Security Administration, said that, in taking his laptop to Ireland, the employee "did violate lab policy"—though Wilkes confirmed that, had the employee asked, permission would have been granted. Wilkes declined to comment for the record on the Nevada e-mail. Regarding the circulation in January of highly classified weapons information over the Internet, Wilkes said that everything the department had to say on the matter could be found in a June 15 letter sent by Energy Secretary Samuel Bodman to Rep. John Dingell, chair of the House Energy & Commerce Committee, which oversees the nuclear weapons complex.
"I can affirm that an individual did in fact unintentionally transmit sensitive information through an unsecured e-mail system," Bodman wrote Dingell. But Bodman played down its significance: "While serious, the incident in question was the result of human error, not a failure of security systems. The Department makes every effort to minimize inadvertent human errors, but we recognize that such errors may occur from time to time. Therefore, we have a robust system in place to report and investigate potential violations. In my opinion this is a circumstance where those systems worked well."
Bodman's professed reassurance is unlikely to satisfy those people—many within the nuclear weapons community—who are concerned by what appears to be a pattern of security problems at Los Alamos stretching back some years. "Boys will be boys, seems to be Bodman's message," one very senior figure in the weapons community said sarcastically: "I doubt that will appease John Dingell." Dingell's staff was unable to respond by deadline to a request for comment. But Dingell has talked in the past of his concerns about what appear to be deeply rooted problems at Los Alamos. Appearing in January before one of Dingell's sub-committees, Thomas D'Agostino, deputy administrator for weapons programs at the NNSA, agreed that successive security breaches at Los Alamos pointed to a failure of what he called "the security culture" there.
D'Agostino promised tough action: "Make no doubt about this. If the current laboratory management is unable or unwilling to change the security culture at LANL, I will use every management tool available to me" to force action, he said in testimony.
Microsoft security response listed as one of the worst jobs.. :-)
What do whale-feces researchers, hazmat divers, and employees of Microsoft's Security Response Center have in common? They all made Popular Science magazine's 2007 list of the absolute worst jobs in science.
Popular Science has been compiling the list since 2003, as "a way to celebrate the crazy variety of jobs that there are in science," said Michael Moyer, the magazine's executive editor. Past entrants have included barnyard masturbator, Kansas biology teacher, and U.S. Metric system advocate.
Moyer said Microsoft's Security Response Center (MSRC) made the grade this year because the job is just so hard and thankless. "It's one of those classic jobs, which isn't gross or dangerous in any way, but the overwhelmingness of the task at hand makes it so daunting that only the most intrepid would venture there."
The MSRC ranked near the middle as the sixth-worst job in this year's list, published in the July issue of the magazine. "We did rate the Microsoft security researcher as less-bad than the people who prepare the carcasses for dissection in biology laboratories," Moyer said.
The absolute worst job? Hazmat diver. "These are highly trained individuals who strap on scuba gear and dive into toxic sludge," Moyer explained.
Microsoft's Mark Griesi considers ranking among the worst as a badge of honor, in part because his grandfather read the story and thought it was "pretty cool to see my team on the list," he said.
Working at the response center "is one of the toughest jobs to have," said Griesi, a program manager with the MSRC. "But with tough challenges come great reward. The article does call out the dedication that the people in all of these jobs have, and I have never worked with a more dedicated group than the MSRC."
Still, the MSRC is not for everyone. Moyer didn't have to think long when asked whether he'd rather have the number 10-ranked whale research job. "Whale feces or working at Microsoft? I would probably be the whale feces researcher," he said. "Salt air and whale flatulence; what could go wrong?"
Monday, June 25, 2007
AOL passwords, 8 characters or more... ??
Users can enter up to 16 characters as a password, but the system reads only the first eight and discards the rest. In effect, every password is truncated to eight characters.
A reader wrote in Friday with an interesting observation: When he went to access his AOL.com account, he accidentally entered an extra character at the end of his password. But that didn’t stop him from entering his account. Curious, the reader tried adding multiple alphanumeric sequences after his password, and each time it logged him in successfully.
It turns out that when someone signs up for an AOL.com account, the user appears to be allowed to enter up to a 16-character password. AOL’s system, however, doesn’t read past the first eight characters.
And if you can’t work out what’s wrong with this... well.
How is this a bad set-up, security-wise? Well, let’s take a fictional AOL user named Bob Jones, who signs up with AOL using the user name BobJones. Bob — thinking himself very clever — sets his password to be BobJones$4e?0. Now, if Bob’s co-worker Alice or arch nemesis Charlie tries to guess his password, probably the first password he or she will try is Bob’s user name, since people are lazy and often use their user name as their password.
And she’d be right, in this case, because even though Bob thinks he created a pretty solid 13-character password — complete with numerals, non-standard characters, and letters — the system won’t read past the first eight characters of the password he set, which in this case is exactly the same as his user name. Bob may never be aware of this: The AOL system also will just as happily accept BobJones for his password as it will BobJones$4e?0 (or BobJones + anything else, for that matter).
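The flaw described above can be sketched in a few lines of Python. This is an illustrative model only; AOL's actual back end is unknown, and the SHA-256 hashing here is just a stand-in for whatever credential check they use. The point is what silent truncation does to verification:

```python
import hashlib

TRUNCATE_AT = 8  # the system silently ignores everything past this point

def store_password(password: str) -> str:
    # Only the first 8 characters ever reach the hash function.
    return hashlib.sha256(password[:TRUNCATE_AT].encode()).hexdigest()

def check_password(candidate: str, stored: str) -> bool:
    # The candidate is truncated the same way, so any input sharing the
    # first 8 characters authenticates successfully.
    return hashlib.sha256(candidate[:TRUNCATE_AT].encode()).hexdigest() == stored

# Bob thinks his 13-character password is in effect...
stored = store_password("BobJones$4e?0")

print(check_password("BobJones", stored))          # just the user name works
print(check_password("BobJonesANYTHING", stored))  # any suffix is ignored
print(check_password("bobjones", stored))          # only the 8-char prefix matters
```

The first two checks succeed and only the third fails, which is exactly the behavior the reader stumbled onto: the "extra" characters never participate in authentication.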
Not smart, eh? AOL is apparently ‘looking into it,’ and that’s all the company has said regarding the matter.
All you need to know about the Mpack attack
We mentioned a large MPack compromise in a diary two days ago. Since then we've been accumulating more information about what is going on behind the scenes. Earlier today VeriSign/iDefense released some pretty good analysis of how it works, what the value of it is, and other goodies. This summary does not exist online but has been spread via email to the media and other outlets. Rather than trying to summarize it, iDefense gave the Internet Storm Center permission to reprint it in its entirety. Thanks, iDefense!
Greetings All,
MPack is the latest and greatest tool for sale on the Russian underground. $ash sells MPack for around $500-1,000; in a recent posting $ash attempted to sell a "loader" for $300 and a kit for $1,000. The author claims that attacks are 45-50 percent successful, using exploits including the animated cursor (ANI) overflow, MS06-014, MS06-006, MS06-044, XML overflow, WebViewFolderIcon overflow, WinZip ActiveX overflow, and QuickTime overflow (all of these are $ash's names for the exploits). Attacks from MPack, aka WebAttacker II, date back to October 2006 and account for roughly 10 percent of Web-based exploitation today, according to one public source.
More than 10,000 referral domains were involved in a recent, largely successful MPack attack in Italy, which compromised at least 80,000 unique IP addresses. It is likely that cPanel exploitation took place on the hosting provider, leading to injected iframes on the domains hosted on the server. When a legitimate page carrying a hostile iframe is loaded, the tool silently redirects the victim, inside the iframe, to an exploit page crafted by MPack. This exploit page executes exploits in a very controlled manner until one succeeds, then installs malicious code of the attacker's choice.
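As a rough illustration of the injection technique described above (not MPack's actual code), the redirect is typically a zero-sized or hidden iframe appended to an otherwise legitimate page. A minimal Python sketch of how a defender might scan fetched HTML for that marker (the sample markup and URL are invented for the example):

```python
import re

# Hypothetical sample of a compromised page: legitimate content plus a
# zero-sized iframe pointing at an attacker-controlled exploit page.
SAMPLE_INJECTED = (
    '<html><body>legit content'
    '<iframe src="http://exploit.example/in.php" width="0" height="0" '
    'frameborder="0"></iframe></body></html>'
)

IFRAME_RE = re.compile(r'<iframe[^>]*>', re.IGNORECASE)
# Zero width/height or display:none are common signs of a hidden iframe.
SUSPICIOUS = re.compile(
    r'width\s*=\s*["\']?0\b|height\s*=\s*["\']?0\b|display\s*:\s*none',
    re.IGNORECASE,
)

def find_suspicious_iframes(html: str) -> list:
    """Return iframe tags that are zero-sized or hidden."""
    return [tag for tag in IFRAME_RE.findall(html) if SUSPICIOUS.search(tag)]

print(find_suspicious_iframes(SAMPLE_INJECTED))  # flags the injected iframe
print(find_suspicious_iframes('<iframe src="x" width="300"></iframe>'))  # []
```

Real detection is harder than this regex suggests (attackers obfuscate with JavaScript, encoded attributes, and off-screen positioning), but the zero-sized iframe remained a common tell in the attacks of this era.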
Torpig is one of the known payloads for MPack attacks to date. This code relates back to the Russian Business Network (RBN), through which many Internet-based attacks take place today. The RBN is a virtual safe house for attacks out of Saint Petersburg, Russia, responsible for Torpig and other malicious code attacks, phishing attacks, child pornography and other illicit operations. The Italian hosts responsible for most of the domains seen in a recent MPack attack are using cPanel, a Web administration tool for clients. A zero-day cPanel attack took place in the fall of 2006, leading up to the large-scale vector markup language (VML) attacks at that time. It appears likely that the Russian authors of the cPanel exploit, Step57.info, who are also related to the RBN, used the exploit to compromise the Italian ISP and the referral domains used in the latest MPack attack.
MPack uses a command and control website interface for reporting of MPack success. A JPEG screenshot of a recent attack is attached to this message.
QUOTES
1. MPack is a powerful Web exploitation tool that claims about 50 percent success in attacks silently launched against Web browsers.
2. $ash is the primary Russian actor attempting to sell MPack on the underground, for about $1,000 for the complete MPack kit.
3. MPack leverages multiple exploits, in a very controlled manner, to compromise vulnerable computers. Exploits range from the recent animated cursor (ANI) to QuickTime exploitation. The latest version of MPack, .90, includes the following exploits:
MS06-014
MS06-006
MS06-044
MS06-071
MS06-057
WinZip ActiveX overflow
QuickTime overflow
MS07-017
4. The Russian Business Network (RBN) is one of the most notorious criminal groups on the Internet today. A recent MPack attack installed Torpig malicious code hosted on an RBN server. RBN is closely tied to multiple attacks including Step57.info cPanel exploitation, VML, phishing, child pornography, Torpig, Rustock, and many other criminal attacks to date. Nothing good ever comes out of the Russian Business Network net block.
5. MPack attacks experience high success, according to attack log files analyzed by VeriSign-iDefense. In just a few hours, more than 2,000 new victims reported to an MPack command and control website. A recent attack, largely focused on Italy, involved more than 80,000 unique IPs.
More on the Harry Potter Hack
Well, it was bound to happen. The "research" chat rooms and mailing lists are all buzzing about the clever hack that somebody claims to have pulled off. We'll know for sure when the book comes out and we confirm or deny what's going on. We're not going to reveal the supposed ending for those who enjoy reading the series about the young wizard but there's plenty of web sites that are already spoiling the fun. So if you know somebody who is a Harry Potter fan and doesn't want to be spoiled, warn them about the supposed leak.
If it's true, then the way the bandit pulled off the heist should be noted by anybody responsible for protecting "secrets," whether they are national secrets, homeland security secrets (ahem!), or intellectual property secrets. According to anonymous posts on a popular mailing list, a "usual milw0rm downloaded exploit" was delivered via targeted email to employees of the publishing company. One or more employees clicked on the link, a browser opened, and they clicked on an animated icon. The malware in the animated icon then opened up a reverse shell, and it was game over. Apparently there were plenty of draft copies lying around on the company's hard drives, so downloading a personal copy was easy. I suppose if you watched The Devil Wears Prada last year you are thinking "yes, that's probably true."
Note to CIOs: you must recognize targeted attacks as a serious threat to the protection of your organization's intellectual property. This is no longer just a theory or academic exercise.
The BOT's are back!
Spammers responsible for last year's Blue Security hack attacks, which threw the blogosphere into turmoil, have carried out serious attacks on anti-spam services.
Using a nasty variant of the Storm Worm and botnets of hijacked PCs, they successfully shut down the three web servers that power the Spamhaus Project, URIBL (Realtime URI Blacklists) and SURBL (Spam URI Realtime Blocklists).
Steve Linford of the Spamhaus Project released the following statement explaining the ferocity of the attacks yesterday.
"The attack is being carried out by the same people responsible for the BlueSecurity DDoS last year, using the Storm malware.
"The attack method was sufficiently different to previous DDoS attacks on us that some of it got through our normal anti-DDoS defenses and halted our web servers.
"At 02:00 GMT we got the attack under control and our web servers are now back up, www.spamhaus.org is running again as normal.
"The attack is ongoing, but it's being absorbed by anti-DDoS defenses. Also under attack by the same gang are SURBL and URIBL.
"Storm is the 'nightmare' botnet, capable of taking out government facilities and causing much mayhem on the internet. It has three functions: sending spam, fast-flux web and DNS hosting mainly for stock scams, and DDoS. There is a hefty international effort underway by cyber-forensics teams in a joint effort by law enforcement and private sector botnet and malware analysts to trace the perpetrators."
Despite Linford's assurances that the Spamhaus Project's site was back in business, attempts to log in this afternoon were persistently met with error messages, suggesting it had again fallen victim to a denial of service attack.
Trusted Computing Group, turns its attentions to storage
The Trusted Computing Group has announced a draft specification aimed at helping block unauthorized access to sensitive data on hard drives, flash drives, tape cartridges and optical disks. These devices won't release data unless the access request is validated by their own on-drive security function.
David Hill, a principal in the Mesabi Group, said: "The public media blares the loss of confidential information on large numbers of individuals on what seems a daily basis, and that is only the tip of the data breach iceberg for not having trusted storage. Trusted storage will soon be seen as a necessity -- not just a nice-to-have -- by all organizations."
The Trusted Computing Group (TCG) is a not-for-profit industry-standards organization with the aim of enhancing the security of computers operating in disparate platforms. Its draft, developed by more than 60 of the TCG's 2175 member companies, specifies an architecture which defines how accessing devices could interact with storage devices to prevent unwanted access.
Storage devices would interact with a trusted element in host systems, generally a Trusted Platform Module (TPM), which is embedded into most enterprise PCs. The trust and security functions from the specification could be implemented by a combination of firmware and hardware on the storage device. Platform-based applications can then utilize these functions through a trusted command interface negotiated with the SCSI and ATA standards committees.
Thus a server or PC application could issue access requests to a disk drive and provide a key, random number or hash value. The drive hardware and/or firmware checks that this is valid and then supplies the data, decrypting it if necessary. Future versions of the SATA, SCSI and SAS storage interfaces would be extended to support the commands and parameters needed for such access validity checking.
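The access-validation flow described above amounts to a challenge-response exchange between host and drive. Here is a minimal Python sketch of the idea; the class names and the HMAC-over-a-nonce scheme are illustrative assumptions for this article, not details from the TCG draft specification:

```python
import hashlib
import hmac
import os

class TrustedDrive:
    """Toy model of a drive that releases data only after validating
    an access request against a shared secret."""

    def __init__(self, shared_key: bytes, data: bytes):
        self._key = shared_key
        self._data = data
        self._nonce = None

    def challenge(self) -> bytes:
        # The drive issues a fresh random number for each access request.
        self._nonce = os.urandom(16)
        return self._nonce

    def read(self, response: bytes) -> bytes:
        # On-drive firmware checks the host's response before supplying data.
        expected = hmac.new(self._key, self._nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(response, expected):
            raise PermissionError("access request failed validation")
        return self._data  # a real drive would also decrypt here

key = b"secret-provisioned-via-tpm"   # e.g. held by the host's TPM
drive = TrustedDrive(key, b"sensitive payload")

nonce = drive.challenge()
host_response = hmac.new(key, nonce, hashlib.sha256).digest()
print(drive.read(host_response))  # validation passes, data is released
```

A response computed with the wrong key raises `PermissionError`, which captures the article's point: the gatekeeping happens on the storage device itself, not in host software that an attacker might bypass.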
Mark Re, Seagate Research SVP, said: "Putting trust and security functions directly in the storage device is a novel idea, but that is where the sensitive data resides. Implementing open, standards-based security solutions for storage devices will help ensure that system interoperability and manageability are greatly improved, from the individual laptop to the corporate data center." Seagate already has an encrypting drive.
Marcia Bencala, Hitachi GST's marketing and strategy VP, said: "Hitachi's Travelstar mobile hard drives support bulk data encryption today and we intend to incorporate the final Trusted Storage Specification as a vital part of our future-generation products."
The TCG has formed a Key Management Services subgroup, to provide a method to manage cryptographic keys.
Final TCG specifications will be published soon but companies could go ahead and implement based on the draft spec.