Hi Eduardo,

I have been testing various software that handles HTTP requests, using my HTTP fuzzer in conjunction with Google's AddressSanitizer [1] (ASAN). I am writing to you to share the results of testing on the Monkey web server, as you are (presumably) the lead developer of the project.

In summary, one important issue that allows an easy denial-of-service attack was found, as well as several memory mishandling issues. I have attached patches to fix most of these issues (*.patch). In addition, sample minimized HTTP requests that trigger the issues (*-request.txt) and the resulting diagnostic output from ASAN when Monkey is run on those requests (*-request-asan-output.txt) are attached where possible.

The testing was performed on Monkey v1.5.1, but I have verified that the issues are still present on the current git master. The filenames and line numbers referenced below, as well as the patches, are based on the current git master, which as of writing is commit 295512e5.

Please excuse me if my descriptions of the issues are overly verbose to someone like you who knows the server inside out; I figure too much information is better than too little!

- Issue #0: Custom error page FD leak (denial of service)

When a custom error page is to be returned in response to a client request, a handle to the appropriate custom error content file is obtained via a call to open() (src/mk_request.c:839). This fd is stored in the fd_file member of the session_request structure.

This is in contrast to the way regular response content files are opened: there, mk_vhost_open() is used instead of calling open() directly. This function uses the per-vhost fd hashtable in an attempt to minimize fd usage, opening a new fd only if necessary. It also sets the vhost_fdt_{id,hash} members of the session_request.

When the time comes to destroy the session_request with mk_request_free(), a call to mk_vhost_close() is made to (maybe, depending on the reader count) close the session_request's fd_file. This call is made regardless of whether the fd was obtained via a call to mk_vhost_open() or by calling open() directly.

mk_vhost_close(), expecting that the handle was obtained via mk_vhost_open(), searches the per-vhost fd hashtable for the session_request's vhost_fdt_hash. In the custom error page case, the hash is always 0, since it has been untouched since the session_request's memory was allocated and zeroed (via memset()). Assuming the hashtable isn't full, the lookup will therefore always mistakenly find a "matching" hashchain element, since uninitialized elements also have a hash of 0 (and a fd of -1, but this is unchecked by mk_vhost_fdt_chain_lookup()). Such an element has a reader count of 0, so the decrement performed by mk_vhost_fdt_close() causes an integer underflow (src/mk_vhost.c:254). As the count is now non-zero, the element is not cleared and the fd is not closed. Hence, the fd is leaked.

An attacker can therefore simply perform requests that result in a custom error page being returned, such as 404s. After many such requests, the server process will hit the maximum number of open fds allowed (as given by `ulimit -n`). Any further request that needs to open a fd in order to complete processing will then fail; i.e., normal, valid requests for content will fail with 403 errors. This allows for a simple denial-of-service attack. Attached is a POC shell script (custom-error-fd-leak-poc.sh) that uses curl to repeatedly generate 404 responses until even normal requests start to fail with 403s.

I am not sure of the best way to fix this. My first ideas are to use the normal per-vhost fd hashtable mechanism to obtain handles to custom error content files instead of open()ing them directly, and/or to make mk_vhost_fdt_chain_lookup() check that hashchain elements' fds are not -1 (i.e., that they are not empty); a rough sketch of the latter follows. I have hence not supplied any patches that attempt to fix this issue.
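To illustrate that second idea, here is the kind of check I have in mind. Note that the structure and function below are simplified stand-ins based on the behaviour described above, not the actual definitions from src/mk_vhost.c:

    #include <stddef.h>

    /* Simplified stand-in for a per-vhost FDT hashchain element. An empty
     * element has fd == -1, hash == 0 and readers == 0. */
    struct fdt_chain_entry {
        int fd;
        unsigned int hash;
        unsigned int readers;
    };

    /* Sketch of the lookup with the extra emptiness check: an empty element
     * can no longer "match" a session_request whose vhost_fdt_hash is still
     * 0, which is what currently lets the reader count underflow and the fd
     * leak. */
    static struct fdt_chain_entry *
    fdt_chain_lookup(struct fdt_chain_entry *chain, int len, unsigned int hash)
    {
        int i;

        for (i = 0; i < len; i++) {
            if (chain[i].fd != -1 && chain[i].hash == hash) {
                return &chain[i];
            }
        }
        return NULL;
    }

With such a check in place, a custom error page request whose vhost_fdt_hash is still 0 would simply miss in the table, and mk_vhost_close() could then fall back to a plain close() on fd_file.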
- Issues #1, #2, #3 and #4: Various memory overread bugs

These bugs are simple mistakes which lead to memory overreads. Depending on the current state of the server's heap, a crash (DoS) may occur, or unrelated memory will be referenced. Patches are attached to fix these issues:

  0001-Request-set-correct-path-size-when-truncated.patch
  0002-Mimetype-don-t-iterate-off-the-start-of-the-filename.patch
  0003-Method-correctly-extract-content-length-value-from-b.patch
  0004-Request-don-t-search-off-the-end-of-the-body-buffer-.patch

Hopefully the commit descriptions and the commits themselves are enough to understand each of the issues. If not, please just ask for more information.

- Issue #5: One-byte heap overflow

This is a simple one-byte heap overflow, caused by not including space for a null string terminator in the allocation of URL-decoded URLs' output buffers. A patch (0005-Utils-allocate-enough-space-to-include-the-null-term.patch) is attached to fix this issue.

If anything given here is unclear, please don't hesitate to ask for clarification! I can also help in getting CVEs assigned to any of these issues, though I personally feel that the only one deserving of a CVE is issue #0 (the DoS). Please let me know how you'd like to proceed.

Thank you for your time, and thanks for the Monkey project,

- Matthew Daley

[1] https://code.google.com/p/address-sanitizer/