Thursday, October 11, 2007

Hypertext Transfer Protocol

Hypertext Transfer Protocol (HTTP) is a communications protocol used to transfer or convey information on the World Wide Web. Its original purpose was to provide a way to publish and retrieve HTML hypertext pages. Development of HTTP was coordinated by the W3C (World Wide Web Consortium) and the IETF (Internet Engineering Task Force), culminating in the publication of a series of RFCs, most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use today.
HTTP is a request/response protocol between clients and servers. The client making an HTTP request - such as a web browser, spider, or other end-user tool - is referred to as the user agent. The responding server - which stores or creates resources such as HTML files and images - is called the origin server. In between the user agent and origin server may be several intermediaries, such as proxies, gateways, and tunnels. It is useful to remember that HTTP does not need to use TCP/IP or its supporting layers. Indeed HTTP can be "implemented on top of any other protocol on the Internet, or on other networks. HTTP only presumes a reliable transport; any protocol that provides such guarantees can be used."
An HTTP client initiates a request by establishing a Transmission Control Protocol (TCP) connection to a particular port on a host (port 80 by default; see List of TCP and UDP port numbers). An HTTP server listening on that port waits for the client to send a request message.
Upon receiving the request, the server sends back a status line, such as "HTTP/1.1 200 OK", and a message of its own, the body of which is perhaps the requested file, an error message, or some other information.
Resources to be accessed by HTTP are identified using Uniform Resource Identifiers (URIs) (or, more specifically, URLs) using the http: or https: URI schemes.
Request message
The request message consists of the following:
Request line, such as GET /images/logo.gif HTTP/1.1, which requests the file logo.gif from the /images directory
Headers, such as Accept-Language: en
An empty line
An optional message body
The request line and headers must all end with CRLF (that is, a carriage return followed by a line feed). The empty line must consist of only CRLF and no other whitespace. In the HTTP/1.1 protocol, all headers except Host are optional.
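To make this structure concrete, here is a minimal sketch in Python (standard socket module only; the host and path are hypothetical) that assembles such a request by hand, with CRLF line endings and an empty line terminating the headers:

import socket

# A hand-built HTTP/1.1 request (hypothetical host and path).
# Every line ends with CRLF; a bare CRLF marks the end of the headers.
request = (
    "GET /images/logo.gif HTTP/1.1\r\n"   # request line
    "Host: www.example.com\r\n"           # the only mandatory HTTP/1.1 header
    "Accept-Language: en\r\n"             # an optional header
    "\r\n"                                # empty line: end of headers, no body
)

with socket.create_connection(("www.example.com", 80)) as conn:
    conn.sendall(request.encode("ascii"))
    print(conn.recv(4096).decode("iso-8859-1"))  # first chunk of the response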
Request methods
HTTP defines eight methods (sometimes referred to as "verbs") indicating the desired action to be performed on the identified resource.
HEAD
Asks for a response identical to the one that would correspond to a GET request, but without the response body. This is useful for retrieving meta-information written in response headers, without having to transport the entire content.
GET
Requests a representation of the specified resource. By far the most common method used on the Web today. Should not be used for operations that cause side-effects (using it for actions in web applications is a common misuse). See 'safe methods' below.
POST
Submits data to be processed (e.g. from an HTML form) to the identified resource. The data is included in the body of the request. This may result in the creation of a new resource, the update of existing resources, or both.
PUT
Uploads a representation of the specified resource.
DELETE
Deletes the specified resource.
TRACE
Echoes back the received request, so that a client can see what intermediate servers are adding or changing in the request.
OPTIONS
Returns the HTTP methods that the server supports. This can be used to check the functionality of a web server.
CONNECT
Converts the request connection to a transparent TCP/IP tunnel, usually to facilitate SSL-encrypted communication (HTTPS) through an unencrypted HTTP proxy.[1]
HTTP servers are required to implement at least the GET and HEAD methods and, whenever possible, also the OPTIONS method.
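As a brief illustration of two of these methods, the following sketch uses Python's standard http.client module (the host and path are hypothetical) to issue a HEAD request and then an OPTIONS request:

import http.client

# Hypothetical host and path. HEAD returns only the headers of the
# resource; OPTIONS asks which methods the server supports.
conn = http.client.HTTPConnection("www.example.com", 80)

conn.request("HEAD", "/index.html")
head = conn.getresponse()
print(head.status, head.getheader("Content-Length"))
head.read()  # drain the (empty) body so the connection can be reused

conn.request("OPTIONS", "*")
options = conn.getresponse()
print(options.getheader("Allow"))  # e.g. "GET, HEAD, OPTIONS" if the server advertises it
conn.close()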
Safe methods
Some methods (e.g. HEAD or GET) are defined as safe, which means they are intended only for information retrieval and should not change the state of the server (in other words, they should not have side effects). Unsafe methods (such as POST, PUT and DELETE) should be displayed to the user in a special way, typically as buttons rather than links, thus making the user aware of possible obligations (such as a button that causes a financial transaction).
Despite the required safety of GET requests, in practice they can cause changes on the server. For example, a web application may allow a database record to be deleted through a simple hyperlink, so that following the link (a GET request) changes the server's state as a side-effect. This is discouraged, because it can cause problems for Web caching, search engines and other automated agents, which can make unintended changes on the server. A more benign example is a GET request that causes the server to create a cache entry.
Idempotent methods and Web Applications
Methods GET, HEAD, PUT and DELETE are defined to be idempotent, meaning that multiple identical requests should have the same effect as a single request. Methods OPTIONS and TRACE, being safe, are inherently idempotent.
The RFC allows a user agent, such as a browser, to assume that any idempotent request can be retried without informing the user. This is done to improve the user experience when connecting to unresponsive or heavily loaded web servers.
However, idempotence is not assured by the protocol or the web server. It is perfectly possible to write a web application in which, for example, a database insert or update is triggered by a GET request; this is a common example of what the spec refers to as "a change in server state".
This misuse of GET can combine with the retry behaviour described above to produce erroneous transactions, and for this reason GET should be avoided for anything transactional and used, as intended, for document retrieval only.
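A minimal sketch of the retry behaviour described above, assuming Python's standard http.client module (the helper name and parameters are illustrative): it retries only methods the specification defines as idempotent, since repeating a POST could repeat a transaction.

import http.client
import time

# Methods the specification defines as idempotent (safe to retry).
IDEMPOTENT = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def request_with_retry(host, method, path, attempts=3, delay=1.0):
    """Retry only idempotent requests; repeating a POST could repeat a transaction."""
    for attempt in range(attempts):
        conn = http.client.HTTPConnection(host, timeout=5)
        try:
            conn.request(method, path)
            resp = conn.getresponse()
            return resp.status, resp.read()
        except (http.client.HTTPException, OSError):
            if method not in IDEMPOTENT or attempt == attempts - 1:
                raise
            time.sleep(delay)  # back off, then try again
        finally:
            conn.close()

# e.g. request_with_retry("www.example.com", "GET", "/index.html")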
HTTP versions
HTTP has evolved into multiple, mostly backwards-compatible protocol versions. RFC 2145 describes the use of HTTP version numbers. The client states at the beginning of the request which version it uses, and the server uses the same or an earlier version in the response.
0.9
Deprecated. Supports only one command, GET — which does not specify the HTTP version. Does not support headers. Since this version does not support POST, the client can't pass much information to the server.
HTTP/1.0 (May 1996)
This is the first protocol revision to specify its version in communications and is still in wide use, especially by proxy servers.
HTTP/1.1 (June 1999)[2][3]
Current version; persistent connections are enabled by default, and the protocol works well with proxies. It also supports request pipelining, allowing multiple requests to be sent before the corresponding responses are received, which lets the server prepare for the workload and potentially deliver the requested resources to the client more quickly.
HTTP/1.2
The initial 1995 working drafts of the document PEP — an Extension Mechanism for HTTP (which proposed the Protocol Extension Protocol, abbreviated PEP) were prepared by the World Wide Web Consortium and submitted to the Internet Engineering Task Force. PEP was originally intended to become a distinguishing feature of HTTP/1.2.[4] In later PEP working drafts, however, the reference to HTTP/1.2 was removed. The experimental RFC 2774, HTTP Extension Framework, largely subsumed PEP. It was published in February 2000.
Status codes
In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase (such as "Not Found"). The way the user agent handles the response primarily depends on the code and secondarily on the response headers. Custom status codes can be used since, if the user agent encounters a code it does not recognize, it can use the first digit of the code to determine the general class of the response.[5]
Also, the standard reason phrases are only recommendations and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicated a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable.
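For example, a user agent that encounters an unrecognized status code can still act on its general class by looking at the first digit; a minimal sketch in Python:

# Map the first digit of a status code to its general class,
# as a user agent might when it sees an unrecognized code.
STATUS_CLASSES = {
    "1": "Informational",
    "2": "Success",
    "3": "Redirection",
    "4": "Client Error",
    "5": "Server Error",
}

def status_class(code):
    return STATUS_CLASSES.get(str(code)[0], "Unknown")

print(status_class(404))  # Client Error
print(status_class(299))  # Success, even though 299 is not a standard code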
Persistent connections
In HTTP/0.9 and 1.0, the connection is closed after a single request/response pair. In HTTP/1.1 a keep-alive mechanism was introduced, where a connection could be reused for more than one request.
Such persistent connections reduce lag perceptibly, because the client does not need to re-negotiate the TCP connection after the first request has been sent.
Version 1.1 of the protocol also introduced chunked transfer encoding to allow content on persistent connections to be streamed, rather than buffered, and HTTP pipelining, which allows clients to send some types of requests before the previous response has been received, further reducing lag.
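A small sketch with Python's standard http.client module (hypothetical host and paths), showing two requests reusing a single persistent connection:

import http.client

# Two requests over one persistent connection (hypothetical host and paths).
conn = http.client.HTTPConnection("www.example.com", 80)

conn.request("GET", "/index.html")
first = conn.getresponse()
first.read()  # the body must be read before the connection can be reused

conn.request("GET", "/about.html")  # reuses the TCP connection if the server kept it open
second = conn.getresponse()
print(second.status)
conn.close()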
HTTP session state
HTTP can occasionally pose problems for Web developers (web applications), because HTTP is stateless. The advantage of a stateless protocol is that hosts do not need to retain information about users between requests, but this forces the use of alternative methods for maintaining users' state, for example, when a host would like to customize content for a user who has visited before. The most common method of solving this problem is the use of cookies, which the server sends to the client and the client returns on subsequent requests. Other methods include server-side sessions, hidden form variables (when the current page is a form), and URL-encoded parameters (such as /index.php?userid=3).
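As a rough sketch of the cookie approach (hypothetical host and paths, using Python's standard http.client module), the client stores the Set-Cookie value from one response and returns it in the Cookie header of later requests:

import http.client

# Hypothetical host and paths. The server identifies the client with Set-Cookie;
# the client echoes the cookie back so later requests belong to the same session.
conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/login")
resp = conn.getresponse()
cookie = resp.getheader("Set-Cookie")  # e.g. "session=abc123; Path=/"
resp.read()

if cookie:
    conn.request("GET", "/profile", headers={"Cookie": cookie.split(";")[0]})
    print(conn.getresponse().status)
conn.close()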
Secure HTTP
There are currently two methods of establishing a secure HTTP connection: the https URI scheme and the HTTP 1.1 Upgrade header, introduced by RFC 2817. Browser support for the Upgrade header is, however, nearly non-existent, hence the https URI scheme is still the dominant method of establishing a secure HTTP connection.
https URI scheme
https: is a URI scheme syntactically identical to the http: scheme used for normal HTTP connections, but which signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL is especially suited for HTTP since it can provide some protection even if only one side of the communication is authenticated. In the case of HTTP transactions over the Internet, typically, only the server side is authenticated.
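A minimal sketch using Python's standard http.client and ssl modules (hypothetical host and path): the request syntax is identical to plain HTTP, but the connection is wrapped in TLS, and only the server is authenticated via its certificate.

import http.client
import ssl

# Hypothetical host and path. The request syntax is unchanged; the socket is
# wrapped in TLS, and only the server is authenticated via its certificate.
context = ssl.create_default_context()  # verifies the server certificate
conn = http.client.HTTPSConnection("www.example.com", context=context)
conn.request("GET", "/index.html")
resp = conn.getresponse()
print(resp.status, resp.getheader("Content-Type"))
conn.close()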
HTTP 1.1 Upgrade header
HTTP 1.1 introduced support for the Upgrade header. In the exchange, the client begins by making a clear-text request, which is later upgraded to TLS. Either the client or the server may request (or demand) that the connection be upgraded. The most common usage is a clear-text request by the client followed by a server demand to upgrade the connection, which looks like this:
Client:
GET /encrypted-area HTTP/1.1
Host: www.example.com

Server:
HTTP/1.1 426 Upgrade Required
Upgrade: TLS/1.0, HTTP/1.1
Connection: Upgrade
The server returns a 426 status code because 400-level codes indicate a client failure (see List of HTTP status codes), which correctly alerts legacy clients that the failure was client-related.
The benefits of using this method for establishing a secure connection are:
it removes messy and problematic redirection and URL rewriting on the server side,
it allows virtual hosting (single IP, multiple domain-names) of secured websites, and
it reduces user confusion by providing a single way to access a particular resource.
A weakness with this method is that the requirement for secure HTTP cannot be specified in the URI. In practice, the (untrusted) server will thus be responsible for enabling secure HTTP, not the (trusted) client.
Sample
Below is a sample conversation between an HTTP client and an HTTP server running at www.example.com, port 80.
Client request (followed by a blank line, so that the request ends with a double newline, each in the form of a carriage return followed by a line feed):

GET /index.html HTTP/1.1
Host: www.example.com
The "Host" header distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1.
Server response (followed by a blank line and the text of the requested page):

HTTP/1.1 200 OK
Date: Mon, 23 May 2005 22:38:34 GMT
Server: Apache/1.3.27 (Unix) (Red-Hat/Linux)
Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT
Etag: "3f80f-1b6-3e1cb03b"
Accept-Ranges: bytes
Content-Length: 438
Connection: close
Content-Type: text/html; charset=UTF-8
The ETag (entity tag) header is used to determine whether a cached version of the requested resource is identical to the current version of the resource on the server. Content-Type specifies the Internet media type of the data conveyed by the HTTP message, while Content-Length indicates its length in bytes. By sending the Accept-Ranges: bytes header, the web server advertises its ability to respond to requests for specific byte ranges of the document; this is useful if the connection was interrupted before the data was completely transferred to the client.[6] Connection: close states that the web server will close the TCP connection immediately after sending this response.
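The same conversation can be reproduced programmatically. The sketch below (Python standard library only) sends the request shown above over a plain TCP socket, with an added Connection: close header so the client knows to read until the server closes the connection:

import socket

# Reproduce the conversation above over a plain TCP socket.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Connection: close\r\n"   # ask the server to close after this response
    "\r\n"
)

with socket.create_connection(("www.example.com", 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:          # server closed the connection, as announced
            break
        response += chunk

headers, _, body = response.partition(b"\r\n\r\n")
print(headers.decode("iso-8859-1"))  # status line and headers, as shown above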
