Pen testing in the Web 2.0 era

How is penetration testing coping with the brave new virtualised world of Web 2.0 with its new opportunities to breach security and compromise data? ProCheckUp director Richard Brain outlines the state of the art.

The increased market adoption of virtualised servers and interconnected web services (Web 2.0) introduces new challenges when performing pen tests to uncover flaws and to create proof-of-concept attacks. Testing with no prior knowledge (black box) has historically provided a good foundation for a sound penetration test, but to detect and defeat today's more prevalent advanced attacks, a more comprehensive review of system information and source code (white box testing) is now required.

Virtualisation

Server virtualisation is rapidly becoming the standard in the data centre. Driven by the release of Windows Server 2008 and Red Hat Enterprise Linux 5.x, and by the desire to fully utilise the power of the latest Xeon chipsets, host machines running these operating systems can easily support four to eight virtual machines.

Worms and viruses have historically spread over network shares, exploiting newly discovered security flaws in machines. Virtual machine sprawl, the uncontrolled creation and expansion of the number of virtual machines, can allow worms and viruses to spread throughout the data centre: unpatched and insecurely configured hosted machines are vulnerable to the same flaws as stand-alone operating systems, and can become reservoirs of malicious agents if not properly managed.

Additionally, virtual machines have predictable hardware profiles, with similar virtual hardware shared between them; future malware might exploit this similarity to spread more rapidly as virtualisation becomes more widespread. BIOS-level rootkits are old news, and it should be expected that rootkits targeting virtual hardware, such as the keyboard controller, will be released in due course.

When worms such as Conficker spread via hard drives, DVD and USB devices using the auto-run feature, hosted virtual machines became infected from the host machine: physical drives shared with hosted machines auto-ran and installed the worm on them. Microsoft released a patch in February 2009 which effectively disabled AutoRun in Windows Server 2008.
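As a rough illustration, an auditor could verify that this hardening is in place by reading the NoDriveTypeAutoRun policy value from the registry. The sketch below assumes a Windows host and the standard Python winreg module; it is a minimal check, not a full compliance test.

```python
# Minimal sketch: check whether AutoRun is disabled on a Windows host by
# reading the NoDriveTypeAutoRun policy value (0xFF disables AutoRun on
# every drive type). Windows-only; assumes the standard winreg module.
import winreg

def autorun_disabled() -> bool:
    key_path = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            value, _ = winreg.QueryValueEx(key, "NoDriveTypeAutoRun")
            return value == 0xFF  # 0xFF = AutoRun disabled for all drive types
    except FileNotFoundError:
        return False  # policy not set, so AutoRun follows the OS default

if __name__ == "__main__":
    print("AutoRun disabled:", autorun_disabled())
```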

Penetrating the virtual world

Penetration testing of virtual machines is little different from testing conventional hosts: open ports are discovered and the services running over those ports are tested for security flaws. Virtualisation support software, such as WMI-based management agents, might also be found running on virtual machines. Interacting manually with individual virtual machines confirms that patching is up to date and that a current antivirus system is running. A further problem is identifying offline virtual machines and backup images (stored offline or online), which may not be sufficiently patched before being exposed to a dangerous environment such as the Internet. The backup images themselves might be infected with malware, which needs to be considered if an organisation has recovered from a malware infection in the past.
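By way of illustration, the discovery step is the same for a virtual host as for a physical one. The sketch below is a bare-bones TCP connect probe against a few illustrative addresses and ports (the host list and port set are assumptions, not real targets); a real engagement would use a full scanner such as Nmap.

```python
# Minimal sketch of the port-discovery step: probe a handful of common ports
# on each virtual machine and report which are open. Not a replacement for a
# proper scanner; hosts and ports are illustrative assumptions.
import socket

HOSTS = ["192.0.2.10", "192.0.2.11"]          # example VM addresses (RFC 5737 test range)
PORTS = [22, 80, 135, 443, 445, 3389, 5985]   # SSH, HTTP, RPC, HTTPS, SMB, RDP, WinRM

def open_ports(host, ports, timeout=1.0):
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:   # 0 means the TCP connect succeeded
                found.append(port)
    return found

if __name__ == "__main__":
    for host in HOSTS:
        print(host, open_ports(host, PORTS))
```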

Servers that host and manage multiple virtual machines (four or more) require more in-depth and focused penetration testing, to ensure that no security flaws exist which might adversely affect the dependent hosted machines. A flaw that allows the hosting machine to be killed results in a simple denial of service against every guest it supports, as does a privilege escalation on the host.

Web 2.0

In the past few years the web has evolved from servers generating content to a more responsive and dynamic mixture of client/server communication. At the same time, JavaScript injection attacks have evolved from session-stealing attacks using cross-site scripting (XSS) to full exploitation frameworks which allow far more serious attacks.

Anton Rager demonstrated the feasibility of JavaScript exploitation frameworks with XSS-Proxy, and further frameworks have since been released, such as BeEF, XSS Shell and Backframe. These frameworks allow more serious attacks, such as intercepting key presses made within the victim's browser. Penetration testers now have to check for the more dangerous XSS attacks in their various forms (reflected and persistent), using common character encodings or browser-specific variations to bypass the different input/output XSS filters. Matters become more complex if programming languages or file types are used which are themselves subject to common XSS attacks through misconfiguration or old, insecure versions.
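To give a flavour of what this checking involves, the sketch below sends a few payload variants to a single parameter and looks for them being reflected back unencoded. The target URL, parameter name and marker string are illustrative assumptions; real testing covers far more encodings, injection contexts and persistent (stored) cases.

```python
# Hedged sketch: probe one parameter for reflected XSS using a few payload
# variants and check whether the payload comes back unencoded. Target URL,
# parameter and marker are illustrative assumptions.
import urllib.parse
import requests

TARGET = "http://example.test/search"   # hypothetical page under test
PARAM = "q"
MARKER = "pcu7331"
PAYLOADS = [
    f"<script>alert('{MARKER}')</script>",
    f"<img src=x onerror=alert('{MARKER}')>",
    f"%3Cscript%3Ealert('{MARKER}')%3C/script%3E",            # URL-encoded variant
    f"<scr<script>ipt>alert('{MARKER}')</scr</script>ipt>",   # nested-tag filter bypass
]

for payload in PAYLOADS:
    resp = requests.get(TARGET, params={PARAM: payload}, timeout=10)
    decoded = urllib.parse.unquote(payload)
    reflected = payload in resp.text or decoded in resp.text  # crude reflection check
    print(f"{payload!r}: reflected unfiltered = {reflected}")
```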

Services like Twitter, eBay, YouTube and LinkedIn, which allow users to upload and modify their own content, pose a number of problems when performing penetration tests. Testers have to determine whether malicious JavaScript can be uploaded directly to the website, and confirm that the website's input and output filters can cope with the behavioural nuances of the various web browsers and rendering engines.

For instance, to bypass input/output filtering of the static 'javascript' keyword (used to run code), it is common to add a newline character within the word, turning 'javascript' into 'java script'; Internet Explorer still processes keywords split by line breaks. Any uploaded files should be treated with suspicion, owing to the various published exploits for rendering errors when processing maliciously submitted files. These might directly affect the website under test if it previews file content, or, more commonly, only affect end-user machines. With GIF files, for instance, the comment area might be used to alter the Flash cross-domain policy, or a GIF file might be combined with a JAR (Java archive), termed a GIFAR, to produce content which is then executed by the Java virtual machine. Other common GIF attacks attempt to trigger buffer overflows to run code on user machines, like the Mozilla Foundation GIF overflow vulnerabilities.
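The keyword-splitting trick is easiest to see written out. The short sketch below simply builds a few 'javascript:' URI variants with whitespace or an encoded newline inside the keyword; each would be submitted through the input or upload form under test and the rendered output inspected to see whether the keyword survives intact. The specific variants shown are common examples, not an exhaustive list.

```python
# Illustrative sketch of the whitespace-splitting bypass described above:
# build "javascript:" URI payloads with a newline, tab or encoded line break
# inside the keyword, which some older Internet Explorer versions collapse
# and execute despite naive keyword filtering.
variants = [
    "java\nscript:alert(1)",      # newline inside the keyword
    "java\tscript:alert(1)",      # tab inside the keyword
    "java\rscript:alert(1)",      # carriage return inside the keyword
    "jav&#x0A;ascript:alert(1)",  # HTML-entity-encoded newline
]

for v in variants:
    # Each variant would be submitted via the form under test and the
    # stored/rendered page checked for the intact, executable keyword.
    print(repr(f'<a href="{v}">click</a>'))
```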

Submitted links can be used to attack flaws in software running on end-user machines (eg Flash Player XSS), or to attack other users of the website directly through a technique called cross-site request forgery (CSRF). CSRF attacks typically occur where the website uses long-lived persistent cookies to authenticate its users: a user visits a maliciously submitted page which then submits a request such as 'delete user' (normally via an image tag). The user's browser, recognising that the request's destination has associated persistent cookies, sends the authentication cookie along with the request, and the site carries out the deletion believing it was submitted by the user. The Samy worm spread across MySpace using an XSS attack, bypassing the site's anti-CSRF mechanisms to perform a CSRF and using string concatenation and character conversion to evade XSS filters.
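A first-pass check a tester might make is whether state-changing forms carry an unpredictable anti-CSRF token at all. The sketch below assumes a hypothetical form URL, session cookie and token naming convention; a real test would also replay the request without (or with a stale) token to confirm the server actually enforces it.

```python
# Rough sketch: fetch a state-changing form and flag it if no hidden
# anti-CSRF token field is present. URL, cookie and token names are
# illustrative assumptions; enforcement must still be verified by replaying
# the request without the token.
import re
import requests

FORM_URL = "http://example.test/account/delete"         # hypothetical form
SESSION_COOKIE = {"session": "persistent-cookie-value"}  # hypothetical long-lived cookie
TOKEN_NAMES = ("csrf", "token", "authenticity", "nonce")

html = requests.get(FORM_URL, cookies=SESSION_COOKIE, timeout=10).text
hidden_fields = re.findall(r'<input[^>]+type=["\']hidden["\'][^>]*>', html, re.I)
has_token = any(any(name in field.lower() for name in TOKEN_NAMES)
                for field in hidden_fields)
print("Anti-CSRF token present:", has_token)
```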

Many servers accept RSS (Really Simple Syndication) news feeds which are forwarded to the servers' subscribers; the subscribers' web browsers then render the information contained within the RSS file. RSS files use the XML standard to transmit information, and a problem occurs when an attacker is able to submit a malicious RSS file. In such an instance the attacker might be able to perform an XXE (XML external entity) attack to read system files and carry out other attacks on the RSS aggregator machine (news site), or exploit an XML parser weakness within subscribers' web browsers, eventually running system commands on subscriber machines. A recent example was the CVE-2009-0137 Safari RSS attack, where a maliciously crafted news feed was potentially able to execute code on the client.
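The sketch below illustrates the XXE idea: a malicious feed declares an external entity pointing at a local file, and a parser configured to resolve entities would inline that file's contents. The defensive settings shown (lxml with entity resolution and network access disabled) are one option among several, with the defusedxml package being another; this is a sketch, not the only correct configuration.

```python
# Hedged illustration of the XXE risk described above. The feed declares an
# external entity referencing a local file; a parser that resolves entities
# would read it. Parsing with entity resolution and network access disabled
# is one way aggregator code can defend itself.
from lxml import etree

MALICIOUS_FEED = b"""<?xml version="1.0"?>
<!DOCTYPE rss [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
<rss version="2.0"><channel><title>&xxe;</title></channel></rss>"""

# Safer configuration: entity resolution and network access switched off
# before parsing any externally supplied feed.
safe_parser = etree.XMLParser(resolve_entities=False, no_network=True)
doc = etree.fromstring(MALICIOUS_FEED, parser=safe_parser)

# The entity is left unexpanded, so no file contents leak into the feed title.
print(repr(doc.findtext(".//title")))
```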

Penetration testers have to ensure that aggregator sites have processes and controls in place so that untrusted RSS feeds cannot be added, and that any code providing RSS feeds has sufficient filtering and malicious code detection so that unpatched subscriber machines do not execute any embedded malicious code. This is not straightforward, as some RSS feeds legitimately embed HTML tags to make their content more interesting.
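One common approach to this tension is allowlist-based sanitisation: keep a small set of harmless formatting tags and strip everything else before feed items are stored or passed on. The sketch below assumes the third-party bleach library and an illustrative tag allowlist; the exact allowlist is a policy decision, not a fixed standard.

```python
# Minimal sketch of feed-item sanitisation with an allowlist, assuming the
# bleach library: harmless formatting tags survive, script tags and unsafe
# URL schemes are removed before the item reaches subscribers.
import bleach

ALLOWED_TAGS = {"a", "b", "i", "em", "strong", "p", "br"}
ALLOWED_ATTRS = {"a": ["href", "title"]}

item_html = ('<p>Breaking news <script>alert(1)</script>'
             '<a href="javascript:alert(1)">more</a></p>')
clean = bleach.clean(item_html, tags=ALLOWED_TAGS,
                     attributes=ALLOWED_ATTRS, strip=True)
print(clean)   # script tag stripped; the javascript: href is dropped by the protocol allowlist
```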

Paying up

Service providers like PayPal, eBay and Amazon remove the need for their users to process card payments and run complex e-commerce environments. The interlinking of these different services means that vulnerabilities in the providers will affect their users; it is becoming common when performing a penetration test for the flaw found to be 'downstream' of the site under test. There are also issues with data integrity, as data is now distributed and shared to and from the different service providers (it might be lost, intercepted and so on). The website under test might submit customer data to a service provider using a published API; historic or current API code might contain programming flaws which allow other registered parties to retrieve customer details, or to interfere with the processing of orders by exploiting flaws within the API.

AJAX (asynchronous JavaScript and XML) has been adopted to speed up data transfer to the client by sending and displaying only the information that has changed, instead of resending a whole page. Its increasingly widespread adoption increases the time needed for penetration tests, as the data sent has innumerable variations to inspect and modify. Understanding the data is time-consuming: JavaScript libraries have to be inspected, the security implications of the implementation understood, and attacks created and simulated, in order to carry out an in-depth penetration test. Various frameworks exist, such as Google Web Toolkit, SAJAX, XAJAX and Microsoft ASP.NET AJAX, and various formats are used to send data, from custom streams to XML and JSON.
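In practice much of this work boils down to capturing the application's XHR requests and replaying them with each field mutated in turn. The sketch below does this for a JSON endpoint; the endpoint, payload structure and probe strings are illustrative assumptions, and real testing uses a proxy such as Burp or an equivalent tool rather than a loop this simple.

```python
# Hedged sketch: replay a captured JSON/XHR request and fuzz each field in
# turn with simple injection probes, watching for response differences that
# hint at parsing or validation flaws. Endpoint and payload are illustrative.
import copy
import requests

ENDPOINT = "http://example.test/api/orders"     # hypothetical AJAX endpoint
BASELINE = {"orderId": 1001, "note": "hello", "quantity": 2}
PROBES = ["'", "\"><script>alert(1)</script>", "{\"$gt\":\"\"}", "0 OR 1=1"]

for field in BASELINE:
    for probe in PROBES:
        mutated = copy.deepcopy(BASELINE)
        mutated[field] = probe
        resp = requests.post(ENDPOINT, json=mutated, timeout=10)
        # Shifts in status code or body length flag candidates for manual review.
        print(field, repr(probe), resp.status_code, len(resp.text))
```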


Facing the challenge

I hope this article has given an insight into some of the current challenges facing penetration testers. More time needs to be allocated to Web 2.0 penetration testing, yet penetration testing companies operate in an increasingly competitive environment, with the market demanding that web application tests be performed to a budget and at year-on-year reduced cost, despite the inherent need to spend more time. This regrettably all-too-brief overview of how penetration testers find vulnerabilities has hopefully helped administrators and information security managers to make their infrastructure more secure. Another concern for administrators and ISMs is the effectiveness of traditional IDS/IPS and application-level firewalls in detecting Web 2.0 attacks, as such devices have been used as the traditional sticking plaster for insecure applications in the past.

This article originally appeared in Test Magazine.