The Lost Update Problem - Part 2: Preventing Lost Updates in Web Applications
I introduced the lost update problem in the previous post "The lost Update Problem in the Context of Transactions". We saw how to prevent lost updates by using transactions with an appropriate isolation level. I also mentioned that the given approach will not work for web frameworks. This post explores why this is the case and discusses an approach to solve the problem.
The GET-MODIFY-UPDATE Cycle
We will take a look at the whole update cycle to get a common understanding of how things happen in a web application. We assume that a user drives the whole process via a web browser. There are essentially three events related to the user's actions:
- The retrieval of data,
- the modification of the retrieved data, and
- the submission of the changes for updating the database.
The user actions and their correspondence to the operations performed by the machinery, as well as the flow of data, are displayed below.
We will now examine the data transfers between the browser, the application, and the data store more closely. The steps match quite accurately how things happen when the Ruby on Rails framework is used; however, the differences compared to other web frameworks should be marginal. First, the client sends an HTTP GET request (1) to edit a resource, e.g. /accounts/1/edit, to the application. The application translates this into an SQL SELECT statement (3) and forwards it to the database. The application layer encodes the result into an accepted format (5) and returns it in the body of a response with status code 200 (6). The client submits the modified data with an HTTP POST request (8) (PUT would be more appropriate, but plain HTML forms only support GET and POST). The application layer compares the submitted data with the stored information (9-13) and then stores the changes with an SQL UPDATE command (14). If the update passes through, the web application sends a redirect response pointing to the now updated resource.
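In terms of SQL, the two requests boil down to something like the following two statements; the table and column names are assumed here for illustration:

```sql
-- Step 3: the GET request to /accounts/1/edit is translated into a SELECT
SELECT * FROM accounts WHERE id = 1;

-- Step 14: the subsequent POST request is translated into an UPDATE
UPDATE accounts SET balance = 150 WHERE id = 1;
```

Note that nothing ties the UPDATE to the earlier SELECT; the two statements may well be executed by different database sessions.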
Implications of RESTful HTTP
There is one very important observation to make: the initial retrieval and the update are two separate HTTP requests. As HTTP is a stateless protocol, the handling of the two requests in the application layer is (almost) totally independent. In particular, there is no guarantee that the initial retrieval and the update are performed by the same database session. It is, therefore, not possible to maintain a transaction that ensures any consistency between those requests, including the prevention of lost updates.
Constraints and Scalability
As mentioned, the above picture holds true for the Ruby on Rails framework, and for pretty much any web framework that takes the constraints, and thereby the advantages, of HTTP seriously. Those advantages boil down to scalability. There is theoretically no limit on the number of instances that serve requests, since the requests are handled totally independently of each other (at least as far as the application layer is concerned). This property is sometimes called a shared-nothing architecture. It is, therefore, quite pointless to discuss the scalability of such web frameworks: the problems are elegantly pushed down to the database layer.
There are countless technical possibilities to circumvent the mentioned constraints, e.g. by maintaining some resource-related state in the application layer. Whether proceeding in this way is an ingenious move or just plain stupidity depends heavily on the actual use case. The space between the two is small, and most approaches will more likely fall into the latter category.
Consequently, we will use the resource itself to store information that prevents accidental overwrites. We do so by introducing a field called _update_token that is part of the state of a resource. This token is carried along wherever a representation of the resource is passed. When an update is submitted, the token is used to validate that the change is based on the most recent state. This is done by comparing the token stored in the database to the token submitted with the update request. If the validation succeeds, the resource is updated and the token is replaced by a new one. If the resource has been updated in the meantime, the tokens will disagree, and the pending update will be declined.
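Assuming an accounts table like the one used in the examples below, adding the token amounts to a plain integer column:

```sql
-- The token is ordinary resource state, stored alongside the data it protects.
ALTER TABLE accounts ADD COLUMN _update_token integer;
```

A plain column suffices because the token needs no special treatment from the database apart from the trigger machinery described below.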
There are essentially two places where the validation of the token can take place: in the application layer at step 13 or in the database at step 15 (see the figure below). It has long been a subject of debate where such validations should be implemented, and zealots on either side will despise the one or the other variant with more verbal effort than the matter is worth. I will demonstrate how both variants are applied in a follow-up to this post.
Performing the validation in the application layer correctly is actually more subtle than it appears at first sight. Using the database layer has the additional advantage that the code won't become outdated as quickly as if it were implemented for the currently fashionable web framework. The default procedural language used in e.g. PostgreSQL, similar to the one in Oracle's database, is quite old-fashioned and verbose. It is, however, readable with only little knowledge of the particular language, or of programming itself. This is the path we will follow in this article.
The Validation Implemented as a Stored Procedure and a Trigger
The Validation Trigger
We need a way to intercept the update mechanism within the database. This is done with a so-called trigger. A trigger can be configured to fire either before or after the event, and for any combination of the insert, update, and delete operations. We will hook into the execution before an update takes place by declaring a trigger called verify_update_token that calls the procedure validate_update_token.
```sql
CREATE TRIGGER verify_update_token
  BEFORE UPDATE ON accounts
  FOR EACH ROW
  EXECUTE PROCEDURE validate_update_token();
```
The Validation Procedure
There is a certain kind of procedure in PostgreSQL that is meant to be called by triggers. These procedures don't take arguments, and their return type is declared as trigger. What we actually return is the (possibly modified) row to be written to disk (or passed on to the next trigger). In the case of an update operation, the row to be replaced is available through the special variable OLD, and the designated replacement is available through NEW.
We will use a positive integer as a token. When an update fires the trigger, we check whether the update contains the negation of the token that is currently stored in the database. There is no conceptual reason why we insist on the negation, but there are two practical ones:
- NEW contains all columns, including those that are not affected by the SQL update clause. An update clause that doesn't touch the token at all would, therefore, pass unconditionally.
- Persistence frameworks, such as Active Record, usually filter out any fields that have not changed and would thus discard an unaltered token. A negated token, however, differs from the one currently stored in the database and is therefore kept by all frameworks.
Returning to the flow of the procedure: we raise an exception if the old and the new token don't match. The exception aborts the function and ensures that the current transaction is marked for rollback (the RETURN NULL that follows is never actually reached and is merely defensive). If, on the other hand, the validation passes, the old token is replaced by a new random number, and the modified row is returned.
```sql
CREATE OR REPLACE FUNCTION validate_update_token() RETURNS trigger AS $$
DECLARE
  new_row RECORD;
BEGIN
  IF OLD._update_token <> - NEW._update_token THEN
    RAISE 'update token has expired';
    RETURN NULL; -- never reached; RAISE aborts the function
  END IF;
  -- validation passed: issue a fresh token along with the update
  new_row := NEW;
  new_row._update_token := random_pos_int();
  RETURN new_row;
END
$$ LANGUAGE plpgsql;
```
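To see the trigger in action, a client submits the negation of the token it originally read. A sketch from psql, with the token value assumed for illustration:

```sql
-- Suppose the row was read while _update_token was 4711.
-- An update based on that state submits the negated token and passes;
-- the trigger then stores a fresh random token in its place:
UPDATE accounts SET balance = 150, _update_token = -4711 WHERE id = 1;

-- Replaying the same statement now fails, because the stored token
-- has changed in the meantime:
-- ERROR:  update token has expired
```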
Helper and Initialization
There are yet a few pieces missing in our implementation:
- We must set an initial random token when a row is inserted, and
- a function for creating random positive integers is not part of the PostgreSQL core library.
I provide solutions below without further ado.
```sql
CREATE OR REPLACE FUNCTION create_update_token() RETURNS trigger AS $$
DECLARE
  new_row RECORD;
BEGIN
  -- equip every freshly inserted row with an initial random token
  new_row := NEW;
  new_row._update_token := random_pos_int();
  RETURN new_row;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER create_update_token
  BEFORE INSERT ON accounts
  FOR EACH ROW
  EXECUTE PROCEDURE create_update_token();
```
```sql
CREATE OR REPLACE FUNCTION random_pos_int() RETURNS integer AS $$
BEGIN
  -- a random integer in the range 1 .. 2147483646
  RETURN floor(2147483646 * random() + 1)::int;
END
$$ LANGUAGE plpgsql;
```
The Last Catch
The precautions given above will suffice for most cases. However, there is still a tiny possibility of ending up with a lost update. I discussed the matter of transactions in the first part, The lost Update Problem in the Context of Transactions. If two users submit their updates at exactly the same time, it is possible that both will pass! It is next to impossible to simulate this within a web browser; however, it is easily demonstrated from the psql command line shell, in the same manner as the demonstrations of the previous post. We should set the isolation level to serializable to close this "security" hole.
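In psql, closing the hole amounts to running the update inside a serializable transaction; the token value below is assumed for illustration:

```sql
BEGIN ISOLATION LEVEL SERIALIZABLE;
UPDATE accounts SET balance = 150, _update_token = -4711 WHERE id = 1;
COMMIT;
-- If a concurrent serializable transaction has updated the same row,
-- PostgreSQL aborts one of the two with a serialization failure,
-- which the client must handle by retrying the whole cycle.
```

This pushes the remaining race entirely into the database, at the price of having to retry transactions that are aborted due to serialization failures.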
The underlying ideas of our approach are in no way novel. The document-oriented data store CouchDB uses a very similar mechanism that is enforced by default. Be aware, though, that CouchDB is more relaxed about overwriting documents during replication between instances. Documents that have possibly been subject to lost updates are marked; however, it is up to the programmer to handle them, and they will be silently ignored otherwise (see Replication and conflict model in the CouchDB Wiki). Consistency in distributed environments is a subtle matter, and it is no less involved than transactions in relational data stores.
In RESTful HTTP, the transfer protocol may not be used to keep any kind of state between requests. Consequently, it is not possible to span transactions over several requests. However, the resource itself can be used to maintain state, which allows us to employ certain protection mechanisms. The interaction of these mechanisms with transactions is subtle: in the end, we had to combine both transactions and resource state to ensure that the demanded functionality behaves as desired.