Before a vulnerability scanner can scan a website or web application, it must know its exact structure. To learn the structure, it must crawl the entire website or web application and find all possible entry points. For this purpose, Acunetix developed its own DeepScan technology that acts similarly to a browser and imitates actions that could be taken by a real user.
Deep Crawling with the Chromium Engine
The DeepScan technology is a DOM parser based on an improved Chromium engine. This engine lets DeepScan emulate the way a user interacts with the browser, including virtual mouse movements and clicks.
- You can thoroughly analyze web applications developed with Node.js, Ruby on Rails, and Java frameworks including JavaServer Faces (JSF), Spring, and Struts.
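The crawling step described above can be sketched as an entry-point discovery pass. The following is a minimal illustration in Python, not Acunetix's actual implementation: it extracts links and form actions from markup using only the standard library, whereas a real DeepScan crawl also executes JavaScript through its Chromium engine, which this sketch omits.

```python
from html.parser import HTMLParser

class EntryPointParser(HTMLParser):
    """Collects links and form actions -- the entry points a crawler must find."""
    def __init__(self):
        super().__init__()
        self.entry_points = set()

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self.entry_points.add(("GET", attrs["href"]))
        elif tag == "form":
            method = attrs.get("method", "get").upper()
            self.entry_points.add((method, attrs.get("action", "")))

# Hypothetical page markup, for illustration only.
page = """
<a href="/products">Products</a>
<a href="/about">About</a>
<form method="post" action="/login">
  <input name="user"><input name="pass" type="password">
</form>
"""

parser = EntryPointParser()
parser.feed(page)
for method, url in sorted(parser.entry_points):
    print(method, url)
```

Each discovered `(method, URL)` pair becomes a candidate target for the vulnerability checks that follow the crawl.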
Crawling Protected Areas with the Login Sequence Recorder
To crawl and scan areas of the web application that require authentication, the scanner needs credentials and must know how to log in. To make this possible, Acunetix uses the Login Sequence Recorder (LSR). With LSR, you can quickly and easily record a series of actions and/or restrictions that the scanner replays to authenticate itself during a crawl and a scan. The Acunetix LSR supports a large number of authentication mechanisms including:
- Multi-step/custom authentication schemes
- Single Sign-On authentication
- CAPTCHAs and multi-factor authentication
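Conceptually, a recorded login sequence is an ordered list of browser actions that the scanner replays verbatim. The sketch below models that idea in Python; all step names and fields are illustrative assumptions, since LSR's actual recording format is proprietary.

```python
# A recorded login sequence modeled as an ordered list of steps.
# Step names and fields are hypothetical, for illustration only.
login_sequence = [
    {"action": "navigate", "url": "/login"},
    {"action": "fill", "field": "username", "value": "scanner@example.com"},
    {"action": "fill", "field": "password", "value": "s3cret"},
    {"action": "click", "target": "#submit"},
]

def replay(sequence):
    """Replays recorded steps, returning the form state the scanner would submit."""
    form = {}
    for step in sequence:
        if step["action"] == "fill":
            form[step["field"]] = step["value"]
        # In a real replay, navigate/click steps would drive the browser itself.
    return form

print(replay(login_sequence))
```

Because the sequence is recorded once and replayed on every scan, the scanner can re-authenticate automatically whenever its session expires.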
Discovering API Endpoints with DeepScan
Most modern web applications are built on top of APIs. These same APIs may also be accessed by mobile applications or used directly by third parties. If the API is accessed by a web application, DeepScan helps Acunetix map the endpoint structure.
- The crawler interacts with AJAX, SOAP/WSDL, SOAP/WCF, REST/WADL, XML, JSON, Google Web Toolkit (GWT), and CRUD operations.
- Although Swagger, WADL, and WSDL files can give the scanner a head start, the crawler can automatically build the structure of endpoints and available calls without any additional information.
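To illustrate the head start an API definition file provides, the sketch below flattens a minimal Swagger/OpenAPI-style document into a list of endpoints. The spec shown is a hypothetical example, and this is a simplified illustration rather than how Acunetix actually ingests such files.

```python
import json

# A minimal Swagger/OpenAPI-style document (hypothetical API, for illustration).
spec = json.loads("""
{
  "paths": {
    "/users": {"get": {}, "post": {}},
    "/users/{id}": {"get": {}, "delete": {}}
  }
}
""")

def list_endpoints(spec):
    """Flattens an OpenAPI 'paths' object into sorted (METHOD, path) pairs."""
    return sorted(
        (method.upper(), path)
        for path, ops in spec["paths"].items()
        for method in ops
    )

for method, path in list_endpoints(spec):
    print(method, path)
```

Without such a file, a crawler must infer the same endpoint list by observing the requests the web application itself makes, which is what DeepScan's automatic discovery does.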