Ocenka-BEL.com - free online tests, practice exams, video lessons and help with writing assignments in literature and the humanities

WEB VULNERABILITIES EXPLAINED

Published: 2017-08-14 07:58:00


Web Vulnerabilities Explained

Introduction
For whom is this book intended?
Web 101
Information Gathering
Search Engine Discovery
Metafiles
Testing error codes
Known vulnerabilities
Things typically retrieved during the Information Gathering phase
Mitigation
Clickjacking
Forced Browsing
Unvalidated Redirects and Forwards
Credential Stuffing
Path Traversal
Code Injection
OS Command Injection
Cross-Site Request Forgery
Cross-Site Scripting (XSS)
Persistent XSS
Reflected XSS
Protecting yourself from XSS
DOM-Based XSS
Web Parameter Tampering
Quiz
SQL Injection
Denial of Service
Programming Vulnerabilities
DoS against customer accounts
ReDoS (Regular Expression Denial of Service)
Man-in-the-middle attack (MITM)
Session Hijacking
Quiz
Execution After Redirect (EAR)
Content Spoofing
Mitigation and Prevention
Practical Case
Information Leakage
Authentication vulnerabilities and flaws
Empty string as password
Hardcoding passwords
Password aging
Privileges
Using the Referer header for authentication or authorization
Password Security
Password cracking
Input Validation Flaws
Regexes that are too allowing
Client-side validation
Buffer overflows
Error handling vulnerabilities and flaws
Returning within a finally block
Lack of error handling
Cryptographic vulnerabilities
Weak or home-grown algorithms
Insufficient randomness
Other cryptographic issues
General Security
HTTPOnly cookies
Update your website server, CMSs and other third-party software regularly
Security misconfigurations
Social Engineering
Final Quiz
How real are website security threats? (Statistics)
Quiz cheat sheet
Quiz 1
Quiz 2
Quiz 3
Further Reading
References

 

Introduction

In this book, we are going to present many diverse and important web vulnerabilities. We are going to include files with sample applications that contain real vulnerabilities, along with code samples in which we discuss common ways of overcoming those vulnerabilities. The book also includes three interactive quizzes located at key points where you can test your understanding of the material. The web vulnerabilities will be presented using the following languages: HTML, CSS, JavaScript, PHP and Node.js.

The author assumes some prior knowledge of web development and the languages mentioned above but does not assume that the reader is proficient in the subject matter.

For whom is this book intended?

If you are a penetration tester, or wish to become one, the book will most certainly be useful, as it shows the key vulnerabilities out there, with examples illustrating how to exploit them and how to patch them should you be required to suggest countermeasures to existing weaknesses.

If you are a web developer, or aspire to become one, this book may also be useful to you, as it shows widespread vulnerabilities along with code examples illustrating what kind of code leads to a particular vulnerability. You will learn how to fix these issues, which will help you write secure code.

If you plan to read the book from a penetration testing perspective, you should focus more on the vulnerabilities and the ways they could be exploited. On the other hand, if you are a web developer, you should pay more attention to the ways in which you can patch those vulnerabilities. Both perspectives are useful for seeing the big picture and for doing better at each of those tasks.

 

 

Web 101

If you use websites a lot and for all kinds of purposes – to buy goods, read the news, communicate with friends, or educate yourself – but you have never tinkered with them, you might think that a website is just like a program with a UI (User Interface) that you have to follow, and that you cannot change the way it behaves. That is a false belief, because:

  1. Even when using browsers, users can change anything they want from the website’s front-end. This includes the HTML (structure of the page), the CSS (the presentation of the page) and the JavaScript (the behavior of the page).
  2. Users do not have to use browsers.
  3. Users can change the headers they send to your server. They can make the request appear to come from a particular browser, pretend that a particular page sent them to a page on your website, and send whatever request method they wish without the assistance of your website’s UI or a browser.

Let’s say you have an HTML form which sets the request method to POST, asks the users for their username and password and shows them a submit button to click on when they are ready. Hackers can take a look at that form, see its action attribute and the name attributes of the inputs in the form, and launch thousands of requests to your API endpoint (the form’s action attribute) with the request method set exactly as in the form’s method attribute and with any data they wish, in order to brute-force user credentials (in most cases, trying different passwords).

<form action="" method="post">
    Username <input type="text" placeholder="Username" name="username" class="form-control">
    Password <input class="form-control" name="password" placeholder="Password" type="password">
    <div class="text-center">
        <input type="submit" value="Login" class="btn btn-lg btn-warning">
    </div>
</form>

Figure 1: A sample form.

You can see the code for that form by launching the Developer Tools and examining the HTML (the DOM) of a page that you have opened. An empty action attribute means the form will submit to the same page you are currently on. In this form, we have two inputs which will be sent to that page as username= and password= (their name attributes).

Therefore, we could use the knowledge we have gathered to launch a program like Hydra and brute force or perform dictionary attacks on that form in order to find legitimate usernames/passwords.

Let us not start right away with Hydra.

We can use curl to launch a request to any website with whatever data we want. We can set a GET, POST, PUT or DELETE request with ease.

  Most web apps use the POST method on their forms whenever a user undertakes an action – such as registering an account or logging in. That is useful to know if you plan to sniff the network traffic. It’s a good idea to filter the traffic that you intercepted to only POST requests.

Knowing what the request methods are used for is quite useful when testing an app for vulnerabilities. All of the four methods are typically used in a RESTful API.

  • GET is used to retrieve all, a set of, or a single database entry. It retrieves data.
  • POST is typically used to send data to the server or create a new database entry.
  • PUT is typically used to update an existing database entry.
  • DELETE is typically used to delete an entry from the database.
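
To make this concrete, here is roughly how the four methods look with curl against a hypothetical /api/users endpoint (the endpoint and the data below are made up for the example):

curl -X GET http://www.example.com/api/users/42

curl -X POST http://www.example.com/api/users -d "name=John"

curl -X PUT http://www.example.com/api/users/42 -d "name=Johnny"

curl -X DELETE http://www.example.com/api/users/42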

Most web applications rely solely on GET and POST requests. GET sends data to the server through the URL. Here is a typical web application with two GET parameters: parameter q with the value of something and es_sm with the value of 93:  https://www.example.com/search?q=something&es_sm=93.

This means that browsers will save the URL you visited along with all the GET data and that you can bookmark the URL. Since it will persist in the browser’s history, it is not a good idea to send new data entities to the server using GET. The POST method sends the user’s data to the server through the request’s message body instead of the URL, so the data is not as exposed when it comes to shared computers and shoulder surfing.

Here is an example:

curl -X POST http://www.dimoff.biz/qa/admin/ -d "username=admin" -d "password=123"

Figure 2: A curl request that attempts to log in to a website.

Windows PowerShell is both a command-line shell and a scripting language, enabling us to do things we would ordinarily do in a programming language.

As you can see, it is not difficult to launch a POST request and attempt to log in as admin with a password of 123. I hope you can imagine what would happen if we looped over each password in a list of passwords and tried to log in with it. Of course, we could use Hydra or another password-cracking program, but let us show you a basic example of doing that with Windows PowerShell and curl.

 

 

$passwords = curl http://www.openwall.com/passwords/wordlists/password-2011.lst

First, we get a list of common passwords and store it in a variable.

$passwords = $passwords[11..($passwords.length - 1)]

Then, we remove the lines with comments from the passwords array.

# Array that will hold any successful password together with the page it returned
$success = @()

for ($i = 0; $i -le $passwords.length - 1; $i++) {
    # Attempt to log in with the current password from the list
    $result = curl -X POST http://www.example.com/qa/admin/ -d "username=admin" -d "password=$($passwords[$i])"
    # On this particular site, the failure page is exactly 22 characters long
    if ($result.length -ne 22) {
        $success += ,($passwords[$i], $result)
        break
    }
}

Finally, we loop over each password in the array and attempt to log in with it; if the page that we get back does not contain exactly 22 characters, we save the password we used and the page we retrieved.

curl is a command line tool and library for transferring data with URL syntax.

Of course, there are many different approaches that could be used depending on the targeted website. Here we know that each time we fail to log in the page returns the same response, but that may differ elsewhere. For example, other pages could show different ad content, have some randomized content on different visits, or otherwise vary between requests. In such situations, you can instead keep repeating the login attempts for as long as the returned page contains text indicating that you have entered the wrong credentials.
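
For instance, assuming the failure page contains a hypothetical message such as "Invalid credentials", the check inside the loop above could be rewritten along these lines (a sketch, not tied to any particular site):

# Assumption: a failed login prints "Invalid credentials" somewhere on the returned page
$result = curl -X POST http://www.example.com/qa/admin/ -d "username=admin" -d "password=$($passwords[$i])"

if ($result -notmatch "Invalid credentials") {
    # No failure text found - keep the candidate password and the page it returned
    $success += ,($passwords[$i], $result)
    break
}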

 

Figure 3: A scenario where our password list contains the correct credentials. The $success array holds one index with the returned page and another with the password we used (not shown here).

You can see that it is possible to launch a basic dictionary attack in a few minutes right in your PowerShell without using any third-party tools and programs if you have basic programming skills.

Information Gathering

There are many places where you can start gathering information about your target. Information could include:

  1. Contact details and personal information for social engineering attacks
  2. Metafiles for the web application meant for crawlers and robots, which could pinpoint forbidden directories and files in the website, amongst other things
  3. Examining the application’s entry/exit/handover points
  4. Analysis of the application’s error codes, which may reveal sensitive information that is not supposed to be shown and can indicate the technologies and products used by the app, so that when commencing an attack you can search online for vulnerabilities in the exact technology it uses
  5. Search Engine Discovery or Reconnaissance to find existing vulnerabilities of the application that may be published online
  6. Mapping the application’s paths, which can help you identify each area of the website that should be investigated for vulnerabilities
  7. Getting the fingerprint of the web server to get acquainted with the version and type of web server you are dealing with, so you can determine known vulnerabilities and exploit them

Search Engine Discovery

If you are thinking of launching a social engineering attack to gain access, you can start by browsing the files that search engines have indexed.

You can search Google for specific file types/extensions located in a particular website. Here are some examples you can try out:

 

filetype:txt inurl:infosecinstitute.com

filetype:pdf inurl:infosecinstitute.com

filetype:doc inurl:infosecinstitute.com

You can search for emails on a specific website using something like the following (the asterisk * is a wildcard, meaning any text can stand in its place):

inurl:infosecinstitute.com *@infosecinstitute.com

Metafiles

You can check the application endpoints that the owners wanted to hide from search engines in order to determine what they value enough to try to protect from unwary eyes. One such file is robots.txt, which is usually placed in the root of a domain or a subdomain.
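
As a quick illustration, a robots.txt file might look something like this (the paths below are made up for the example):

User-agent: *
Disallow: /wp-admin/
Disallow: /adminpanel/
Disallow: /backups/

Every Disallow line is a path the owner does not want crawlers to index – and therefore a path worth noting during information gathering.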

Here are some live examples of checking the robots.txt of famous websites:

 

Figure 5: Robots.txt is disallowing indexing of wp-admin

In the above example, you can see that the directory wp-admin, located in the root of the subdomain, is disallowed. Going to wp-admin will prompt us to log in as an administrator in the WordPress admin panel, and it is a place where we could try a brute-force or a dictionary attack to gather credentials. Now we know which CMS the website is using, so we can test vulnerabilities that exist in it, and we know where we can try to crack passwords.

 

Figure 6: The robots.txt of a famous classifieds marketplace

In the above example, you can also see the URL of the admin panel: /adminpanel/ (if it starts with /, the page is located in the root of the website whose robots.txt file we are accessing), amongst other crucial endpoints.

Testing error codes

Most websites disallow access to entire directories and only allow access to certain files. On Apache servers, which commonly host PHP applications, this can be done by adding Options All -Indexes to your root’s .htaccess file. You can use this to see what kind of 403 Forbidden errors the server returns.

Many websites create a custom 404 page and prohibit directory listing. However, they often do not add a custom page for the 403 HTTP status code, which can lead to information about the server being revealed.
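
If the site runs on Apache, one way to avoid leaking server details on those pages is to define your own error documents in the root .htaccess file (the file paths here are hypothetical):

ErrorDocument 403 /errors/403.html
ErrorDocument 404 /errors/404.html

With such directives in place, the server serves your custom pages instead of its default, signature-bearing error pages.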

 

Figure 7: Attempting to access a directory in a website causes it to return the software behind the server: nginx/1.0.0

Known vulnerabilities

An important habit is to check the headers the server returns when you send an HTTP request. With cURL, this can be done using the following command: curl -v websiteURLhere

If we check the headers of a large Bulgarian news website, we can see that the headers return the version of ASP.NET the website is using as well as the version of the ASP.NET MVC framework. This is a big security hole as attackers can gather information from the Internet as to what kind of vulnerabilities exist in that version of the software and launch successful attacks with ease.

 

Figure 8: Getting the response headers from websites can return crucial information about the software and libraries behind the web application.

Now, if we use https://asafaweb.com/ to scan that ASP.NET website for vulnerabilities, it immediately reports two problems: Excessive Headers (the exact issue we spoke about, where the headers reveal sensitive information about the web platform) and Clickjacking (an attack in which the attackers embed the target website in a frame, which is usually transparent, so that when users click on something on the current site they actually click on the page in the frame and probably take undesired actions on the targeted website).

When we try the same command (curl -v followed by the site’s URL) on a different website, we can see that it also returns the software behind the server, which is nginx, along with its version.

 

Then, you can search the Internet for known vulnerabilities related to that web platform and potentially crack the website.

Starting with a search engine, you can enter the following URL:

https://www.google.bg/search?q=intext:deprecated:+mysql_query():+the+mysql+extension+is+deprecated+and+will+be+removed+in+the+future:+use+mysqli+or+pdo+instead

This search query will show you websites that contain the deprecation warning in their HTML (content/text). Besides showing you programming websites where people have asked how to fix the warning with that text, this query will also reveal websites which are using the outdated mysql_query function, and you will be able to try further attacks on them to see whether they are sanitizing input. In the best case, this error will only reveal the path to the public web root. You can search for different types of errors that a web application may generate in specific programming languages in order to find a convenient target. That is why error handling should be regarded as an important task.

Things typically retrieved during the Information Gathering phase

In the first step of the information-gathering process, the attacker tries to get hold of data about the infrastructure in use and the people involved. The former could be used for discovering new, easier targets and shared resources, while the latter could be used to launch brute-force attacks on the running services and to initiate social engineering and spear-phishing attacks. Attackers may also look for different kinds of information leaks, investigations and analyses.

Fetching as much data as possible to see what could be retrieved is a necessity in penetration testing, but web developers could also perform this task to verify that their system is not revealing too much data about itself.

Your website has to be designed using good exception handling and coding standards. If there are flaws in that area, attackers can send malformed queries to the application you are creating or examining and use the displayed errors to learn more about the backend behind the web application (such as table and column names, or particular code snippets which could reveal flaws) or about the technologies and products that the web application relies on.
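
A minimal sketch of that principle in PHP (the database call below is hypothetical) is to log the details server-side and show the user only a generic message:

<?php
try {
    $result = $db->query($sql); // hypothetical database call that may fail
} catch (Exception $e) {
    error_log($e->getMessage()); // keep table names, paths and stack details in the server log
    http_response_code(500);
    echo 'Something went wrong. Please try again later.'; // reveal nothing about the backend
}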

The attacker might also try to get to know the technologies and products behind the web application through examining the headers in order to understand what types of vulnerabilities he could exploit.

The attacker can also check the mail headers of emails received from the target, which can reveal the application they were sent from, the IP address and the host of the sending entity.

 

Figure 9: You have to click on Show original to see the original email (with headers preserved) in Gmail

There is going to be data that can prove useful:

Received: from mx1.example.com (mx1.example.com. [2004:63c:210c::4])
Received-SPF: pass (google.com: domain of example@example.com designates 2001:67c:220c::4 as permitted sender) client-ip=2004:63c:210c::4;
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:31.0) Gecko/20100101 Thunderbird/31.3.0

You can see a sample IPv6 or IPv4 address, host and user agent of the sender.

They can check search engines for subdomains and link them to their respective IP addresses to broaden their attack possibilities with a tool such as theHarvester.

Email addresses can be automatically collected using tools like theHarvester.
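
For example, an invocation along the following lines (the domain is a placeholder, and the exact options vary between theHarvester versions) asks Google for email addresses and hosts related to a domain:

theharvester -d example.com -b google -l 200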

Once the attackers have found emails, names or other information related to people involved with a target, they can head over to social media to find personal information about those people, which can help them in social engineering attacks. Sites like LinkedIn contain data about a person’s current job, past jobs, education and job description, amongst other data, which can make for a highly targeted spear-phishing attack and point of entry.

Metadata (data about data) could be extracted from publicly available resources from your web application to gain more information about it.

Metadata in images (EXIF, or Exchangeable Image File Format, commonly found in .jpg files amongst others) can reveal the GPS coordinates of the people in the photo at the time it was taken (where and when they were), as well as the type of camera and its serial number, with possibly all details about the camera. There are programs that can display that metadata, such as Photoshop, as well as quick online tools (such as http://regex.info/exif.cgi).

 

Figure 10: A small portion of the EXIF metadata as returned by regex.info/exif.cgi

Documents found online (such as .pdf, .doc, .ppt, .xls) also contain metadata and a tool called Metagoofil could be used to view it. It can reveal machine usernames, worker names, server names, paths (by showing the path where the file was located at a given point), software versions and dates. Those bits of information could be used for targeted social engineering attacks, finding existing vulnerabilities in the used software, amongst other purposes.

Cyber-criminals can also run automated software that would check in which popular websites a user is registered to enable even more personal and complicated spear-phishing attacks, attempt credential stuffing/launch brute-force attacks, etc.

Mitigation

To combat metadata leaks from your organization’s public Microsoft Office files you can manually remove the metadata for the file that is going to be public. In Microsoft Office Word 2010, you have to click on File -> Check For Issues -> Inspect Document. In Microsoft Office Word 2013, you would go there from Info -> Check for Issues -> Inspect Document.

 

Figure 11: Inspecting the document for metadata

After you inspect the document, you can choose what you want to remove from it:

 

Figure 12: Metadata about the author was found in the document involved with this book.

Alternatively, you can use scripts or programs to automatically remove the metadata from such documents.

EXIF metadata from images can also be removed, even with online tools such as http://www.verexif.com/en/index.php.

You should also try to present the information on your website in a way that is not easy for attackers to parse automatically.

You should not share information about the versions and products behind the web application in headers or error messages. Create your own custom error messages and remove HTTP headers that reveal that information. For example, when you use Node with Express.js, it automatically sets an X-Powered-By header indicating that you are relying on Express. You can remove that header directly with something like:

res.removeHeader('X-Powered-By');

If you are coding in PHP, information about Apache, nginx or another server you use, along with PHP version information, may be exposed, and it is also removable: see “Remove server info and PHP info from response header” on Stack Overflow.
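
As a rough sketch, the following settings reduce what PHP and Apache reveal about themselves; adapt them to your own setup (nginx has analogous directives):

; php.ini - stop PHP from sending its version in the X-Powered-By header
expose_php = Off

# Apache configuration - report only "Apache", without version or module details
ServerTokens Prod
ServerSignature Off

You can also strip the header at runtime from PHP code with header_remove('X-Powered-By');.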

Also, avoid keeping uploaded files on the server longer than necessary, removing them as soon as they are no longer needed, and carefully watch what you post or send online.

Clickjacking

Clickjacking is an attack in which the attacker uses transparent or opaque layers to trick users into thinking they are clicking on one button whereas they are actually clicking on a different button, possibly on a different website.

You can see a simple example of clickjacking in the /files/clickjacking directory.

Firstly, we create an iframe pointing to an article on a website for programming articles. Then we add a div with the class box that says iPhone. Afterwards, it is just a matter of some basic CSS.

<iframe src="http://www.phpgang.com/useful-html5-features-part-3_1840.html" frameborder="0"></iframe>

<h1>Click on the pink box to win an iPhone</h1>

<div class="box">

    iPhone

</div>
iframe {



    opacity: 0;

    height: 900px;

    width: 1000px;

    overflow: scroll;

}

We set the opacity of the iframe with the article to 0 so that it is not visible but is still there.

Next, we give the box that the user will actually see an absolute position and place it exactly where the element we want the user to click sits in the invisible website inside the iframe. Finally, we add z-index: -1; so that when the user thinks he is clicking on the pink box, he is actually clicking on the iframe, because the pink box has a negative z-index and sits behind it.

.box {

    width: 44px;

    height: 18px;

    position: absolute;

    top: 339px;

    left: 55px;

    background-color: pink;

    border: 1px solid crimson;

    cursor: pointer;

    z-index: -1;

}

Oops, we have successfully created a page in which, when the user clicks on the pink box, he automatically likes an article of mine.

Here is how the page looks if the frame is semi-transparent:

Figure 13: Do you see how we have placed the pink box on top of the like button?

Figure 14: And here is how it looks after the iframe is fully transparent

You can see that if you hover the mouse over the box for a while, a Like popup appears. Add some text telling the user that he is going to win an iPhone and start gathering likes. I am kidding, of course; don’t do it.

To prevent such attacks, you should set the X-Frame-Options response header to DENY or SAMEORIGIN, or whitelist only particular websites through ALLOW-FROM followed by the allowed origin.
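
In PHP, for example, the header can be sent before any output is produced (Express users can achieve the same with res.set); a minimal sketch:

<?php
// Allow the page to be framed only by pages from the same origin
header('X-Frame-Options: SAMEORIGIN');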

This is a form of UI redressing; penetration testers should ensure it is not possible, and web developers should remedy it through the use of the X-Frame-Options header.

Forced Browsing

Forced browsing is a vulnerability in which the attacker can access resources that are not linked to by the application but are still accessible if you know how to find them. For example, you can change the URL in some predictable manner and see another person’s resource. This technique allows you to brute-force different combinations and come up with resources that you are interested in.

For example, I created a note on notes.io and got a short link. I could see that the link to my note contained 4 characters – both uppercase and lowercase letters allowed – so I figured I could just brute-force other people’s notes.

 

Figure 15: A note I created in a site designed for taking notes.

 

Figure 16: A random note that I encountered when brute forcing

This is just a simple example, as notes usually do not contain the most sensitive data, but it shows how you can brute-force different websites to access resources that are not linked to anywhere on the particular website.

Sometimes, what you have to change is even simpler – there may just be a number that you have to keep incrementing/decrementing, or a name you have to guess (there are many common names out there).

Automated directory surfing is also possible: an automated program can open different directories and, if they exist and are accessible, record that so the attacker can check them out later. The first thing you need to do is prevent users from browsing directories and getting a listing of all files/subdirectories that exist there. On Apache with PHP, you can do that by creating an .htaccess file in your web root and adding the following line to it:

Options All -Indexes

This allows people to browse files, and directories only when the directory has a default (index) file to serve. Node.js does not provide directory listing functionality by default, so you would be okay if you are working with Node.

The next thing you have to do is create an authorization mechanism that checks whether the user has access to the file he is requesting and serves him that resource only if he is authorized. You can use an arbitrary database – such as MySQL, SQLite or MongoDB – to store the users and their authorization level (the set of resources they can access). When a user is logged in and has an identifier in a session, you can check his authorization level and respond with a 403 (Forbidden) status code if he is not entitled to access the content, or simply notify him that he cannot access that particular resource. Otherwise, you can let him see the resource.

If you are a penetration tester, you should test how easy it is to brute-force other values by automating requests with different characters in the GET parameter. Web developers should either apply authorization schemes or make sure their parameters are not predictable. Putting a sequential integer in a GET parameter called something like id is one of the most predictable techniques out there. All your information could be scraped and used by competitors or malicious people – the AT&T email security breach is an example of such a flaw.

Now, let us examine a simplified authorization mechanism in PHP that shows users only the resources in a particular folder they are entitled to access and does not allow them to download files they do not have access to.

When the user first enters the website, he is prompted to log in:

 

Figure 17: The user has to authenticate himself to access resources

Abraham has the highest access possible (two, or TS for top secret), so he can view all files in the confidential folder. Access to files, in this fictional case, is determined by comparing the user’s clearance with the metadata prepended to the filename (none requires no special access and is used as clearance level 0 in our code, S requires access to secret documents and corresponds to clearance level 1, while TS corresponds to clearance level 2). If a person has a TS (2) clearance level, he can access the resources of all clearance levels below that.

 

Figure 18: A fictitious user has access to all resources

Now, our next user (John) has a clearance level of one and he can access secret files but not top secret files.

 

Figure 19: The directory listing shows only the files the user has access to

If John tries to directly access a top-secret file, he will not have any luck.

 

Figure 20: The user cannot access top secret files even if he knows their name

The user model is a simple class with a couple of methods.

We have a private method that fetches and returns all users (to simulate a database query):

private function allUsers() {
    $users = ['John' =>
        ['name' => 'John', 'password' => 'tunafish', 'full_name' => 'John Kenneth Levine', 'clearance' => 1],
        'Abraham' =>
            ['name' => 'Abraham', 'password' => 'smokedhering', 'full_name' => 'Abraham Schulz Lincolk', 'clearance' => 2],
        'Ivan' =>
            ['name' => 'Ivan', 'password' => 'smokedsalmon', 'full_name' => 'Ivan Turtle Dimov', 'clearance' => 0]
    ];
    return $users;

}

We have a public method which returns true if the user’s access is greater than or equal to the access required by the file (as stipulated in its filename):

public function isAuthorized($name, $filename) {
    $clearances = ['none' => 0,'S' => 1, 'TS' => 2];
    return $this->allUsers()[$name]['clearance'] >= $clearances[explode('_', $filename)[0]];
}

Finally, we have a public method that logs the user in so that he can access files while his browser session is active. The method returns true if it successfully logged the user in and false if any of the given credentials were wrong:

public function logIn($password) {
    $users = $this->allUsers();
    $name = $this->user;
    if (array_key_exists($name, $users) && $users[$name]['password'] === $password ) {
        $_SESSION['user'] = $name;
        $_SESSION['clearance'] = $users[$name]['clearance'];
        return true;
    }
    return false;
}

For the custom directory listing, we have a simple helper function which reads all files from a specified directory (in our case, assets) and adds a link to each file the user has access to (using the isAuthorized method):

function readDirectory($dir)
{
    global $user;
    $thelist = "";
    if ($handle = opendir($dir)) {
        while (false !== ($file = readdir($handle))) {
            // Skip the "." and ".." entries and list only files the user is cleared for
            if (preg_match('/.*[A-Za-z]+.*/', $file) && $user->isAuthorized($_SESSION['user'], $file)) {
                $thelist .= '<a href="' . $dir . '/' . $file . '">' . $file . '</a><br>';
            }
        }
        closedir($handle);
    }
    ?>
    <p>Files you can access:</p>
    <p><?php echo $thelist; ?></p>
    <?php
}

We also have another .htaccess rule which redirects users trying to access a file within the assets folder to index.php?file=FILENAME, so that we can determine on our own whether to let them access the file:


<IfModule mod_rewrite.c>
    Options +FollowSymLinks
    RewriteEngine On
    RewriteCond %{REQUEST_FILENAME} -f
    RewriteCond %{REQUEST_FILENAME} assets
    RewriteRule ^(.*)$ index.php?file=$1 [QSA,L]
</IfModule>

 

Finally, if a user is trying to access a file (the GET parameter file is set) and he is logged in, we check whether he has the necessary clearance and either force a download of the file or tell the user he is not entitled to see what is there.

if (isset($_GET['file']) && isset($_SESSION['user'])) {

    $file = explode("/", $_GET['file'])[1];
    if ($user->isAuthorized($user->user, $file)) {
        if (!is_dir($_GET['file'])) {
            header("Content-disposition: attachment; filename=\"$file\"");
            readfile($_GET['file']); // outputs the file contents; no need to echo its return value
            die(); // stop here so the error message below is not appended to the download
        }
    }
    echo "You are not authorized to access: " . $file;
    die();
}

This is just one example of how you can prevent forced browsing. There are myriad ways in which you can implement authorization mechanisms, but you should take into account that disabling directory listings is often not sufficient protection against forced browsing (depending on what kind of data is stored in your web root).

Unvalidated Redirects and Forwards

If you have a web application that takes a URL as a parameter and redirects to it without proper validation, then an attacker can hand out a link that points to your website but actually redirects to another website. That site may be malicious in the sense of prompting users to install malware, or it could be a phishing website looking exactly like yours so that the attacker can get the user’s credentials or similar sensitive data.

Here is an example:

http://www.example.com/out?redirect=http%3A%2F%2Fen.wikipedia.org%2Fwiki%2FADD

An unwary user may not see that this will actually take him to http://en.wikipedia.org/wiki/ADD instead of a page on the given website, and attackers can exploit this.

To remedy such a situation, avoid redirects and forwards. If they cannot be avoided, do not use user-supplied parameters (especially GET parameters, which are located in the URL) when determining the destination website. For example, in PHP this can be done simply by adding a Location header before a response is sent to the user, hardcoding the values according to your app’s redirection logic:

PHP

 

 

header("Location: http://example.com/login.php");

Node.js with Express

 

 

res.redirect("http://resources.infosecinstitute.com");

Node.js

 

 

response.writeHead(302, {

    'Location': 'your/404/path.html'

    //add other headers here...

});

response.end();

If neither is possible, then you have to ensure the given value is valid and authorized for that particular user.

Particular measures you can implement immediately are:

  1. Create a whitelist of trustworthy websites to which user redirection will be allowed (for example, an array or a database table with hosts)
  2. Before redirecting, inform the user which website they are going to be redirected to and let them confirm their intention of going there. If they do not confirm, do not redirect them.

Here is a simplified example of how these steps can be implemented:

$allowedHosts = ['example.com', 'example2.com', 'samplesite.com', 'samplesite2.com'];

if (isset($_GET['redirect'])) {
    if (in_array($_GET['redirect'], $allowedHosts)) {
        createRedirectConfirmation();
    }
}

Firstly, we create an array of the hosts that are allowed. We check whether the redirect GET parameter is contained within our array of allowed hosts (we do not need the http:// protocol in the GET parameter at this point). If it is, we call a function that prints the necessary warning and asks the user to confirm or go back to our website.


function createRedirectConfirmation() {
    ?>

    <h3>You are about to leave our site to go to <?php echo htmlspecialchars($_GET['redirect']); ?>.</h3>

    <p>If you intended this action please confirm, otherwise it is safe to go back to our website</p>

    <div>

        <a style="margin:10px;" href="http://<?php echo htmlspecialchars($_GET['redirect']); ?>">Confirm Redirect</a>

        <a onclick="history.back()" href="JavaScript:void(0)">Go Back</a>

    </div>

    <?php
}

The function contains mostly HTML. It tells the user which website he is going to be redirected to and notifies him that he can either confirm or go back to our website. At this point, we add the complete URL as an anchor, and if the user confirms, he will go to the external website. Otherwise, if he clicks on Go Back, JavaScript’s history.back method is called, which returns the user to the previous page.
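
The same idea can be sketched in Node.js with Express; the route, the parameter name and the host list below are assumptions made for the example:

var express = require('express');
var app = express();

var allowedHosts = ['example.com', 'example2.com', 'samplesite.com', 'samplesite2.com'];

app.get('/out', function (req, res) {
    var target = req.query.redirect;
    // Only show the confirmation page for whitelisted hosts; otherwise send the user back home
    if (allowedHosts.indexOf(target) !== -1) {
        res.send('You are about to leave our site to go to ' + target +
            '. <a href="http://' + target + '">Confirm Redirect</a> or <a href="/">go back</a>.');
    } else {
        res.redirect('/');
    }
});

app.listen(3000);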

If you are a penetration tester, you have to check the redirection scheme of the examined website and whether it could be exploited, while web developers should aim to add a notification and confirmation step before proceeding with the redirect.

Credential Stuffing

This attack occurs when account and password combinations have already leaked on the Internet, or the attacker has acquired them himself from the database of a website he managed to crack. Credential stuffing, or account takeover, is the act of using stolen credentials to try to log in to different websites and completely control the matching accounts. Usually, the attacker would use an account checker that promptly tests many websites to see whether the credentials are valid there. Web automation toolkits such as PhantomJS can be used to create such account checkers, and powerful ones cost as little as $100. Typically, successful logins are between 0.1% and 0.2% of the total login attempts.

There is no complete way to prevent such an attack, as attackers can gain credentials from a third-party website and use them to log in as users on your website. One thing you can do is show some sort of UI during the registration process informing users that reusing the same password on many websites is dangerous.

Another thing you can do to mitigate such an attack is to look out for credential leaks from third-party websites, obtain the leaked data and determine whether the password associated with each leaked email address on the third-party website is the same as the password on your website. If so, you can lock the account, force the user to reset their password or undertake any other action that can help them. Facebook has been following this procedure, as you can read in Amit Chowdhry's article for Forbes.
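
A simplified sketch of that idea in PHP follows; the $leakedCredentials array and the findUserByEmail and forcePasswordReset helpers are hypothetical and stand in for your own data source and user model:

<?php
// $leakedCredentials holds email => plain-text password pairs obtained from a third-party leak
foreach ($leakedCredentials as $email => $leakedPassword) {
    $user = findUserByEmail($email); // hypothetical lookup in your own user table
    if ($user !== null && password_verify($leakedPassword, $user['password_hash'])) {
        // The leaked password also works on our site - lock the account and force a reset
        forcePasswordReset($user['id']); // hypothetical helper
    }
}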

Path Traversal

This vulnerability can be seen in websites that receive input from the user and use it in a path to access files in the file system. It allows the attacker to access files and/or directories that are restricted or are designed not to be seen by website visitors.

You can see the example in the files/path-traversal folder.

If we have some anchors t
