This middleware puts the session id everywhere it might be needed: as a hidden input in every form, at the end of the document as a JavaScript variable so AJAX requests can use it, and of course as a GET variable of the request.
To make it work correctly, the MIDDLEWARE_CLASSES tuple must be in this order:
`
'CookielessSessionPreMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'CookielessSessionPosMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
`
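For reference, here is a minimal sketch of how the two halves could be written (illustrative only, not the snippet's actual code), assuming the old-style middleware API that MIDDLEWARE_CLASSES implies:
`
from django.conf import settings

class CookielessSessionPreMiddleware(object):
    """Runs before SessionMiddleware: recover the session key from the
    querystring and pretend it arrived as a cookie."""
    def process_request(self, request):
        session_key = request.GET.get(settings.SESSION_COOKIE_NAME)
        if session_key:
            request.COOKIES[settings.SESSION_COOKIE_NAME] = session_key

class CookielessSessionPosMiddleware(object):
    """Runs on the way out: inject the session key into every form and
    expose it as a JavaScript variable for AJAX requests."""
    def process_response(self, request, response):
        if hasattr(request, 'session') and 'text/html' in response['Content-Type']:
            key = request.session.session_key or ''
            hidden = ('<input type="hidden" name="%s" value="%s" /></form>'
                      % (settings.SESSION_COOKIE_NAME, key))
            script = '<script>var session_key = "%s";</script></body>' % key
            response.content = (response.content
                                .replace('</form>', hidden)
                                .replace('</body>', script))
        return response
`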
Hope it works for someone else out there.
This was a wild experiment, which appears to work!
One model holds all the data, from every object version ever to exist. The other model simply points to the latest object from its gigantic brother. All fields can be accessed transparently from the little model version, so the user need not know what is going on. Coincidentally, Django model inheritance does exactly the same thing, so to keep things insanely simple, that's what we'll use:
`
from django.db import models
# FullHistory is the versioning base class this snippet provides

class EmployeeHistory(FullHistory):
    name = models.CharField(max_length=100)

class Employee(EmployeeHistory):
    pass
`
That's it! The Django admin can be used to administer `Employee`, and every version will be kept as its own `EmployeeHistory` object; these can of course all be browsed using the admin :-)
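Registering both models is just the standard admin setup (the app name here is a placeholder):
`
# admin.py ('myapp' is a placeholder)
from django.contrib import admin
from myapp.models import Employee, EmployeeHistory

admin.site.register(Employee)
admin.site.register(EmployeeHistory)
`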
This is early days and just a proof of concept. I'd like to see how far I can go with this: handling `ForeignKey`s, `ManyToManyField`s, using custom managers and so on. It should all be straightforward, especially as the primary keys should be pretty static in the history objects...
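To make the intended behaviour concrete, here is roughly what using it should look like (illustrative; how versions are linked together is internal to `FullHistory` and isn't shown in this post):
`
e = Employee.objects.create(name='Alice')
e.name = 'Alice Smith'
e.save()

# Still one Employee, but a history row per version:
Employee.objects.count()         # 1
EmployeeHistory.objects.count()  # 2
`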
*updated 3 August 2009 for Django 1.1 and working date_updated fields*
This is a bastardisation of a few of the Amazon S3 file-uploader scripts that are around on the web. It uses Boto, but it's pretty easy to switch to the Amazon-supplied S3 library available for download at [their site](http://developer.amazonwebservices.com/connect/entry.jspa?externalID=134).
It's mostly based on [this](http://www.holovaty.com/blog/archive/2006/04/07/0927) and [this](http://www.davidcramer.net/code/112/writing-a-build-bot.html).
It's fairly limited in what it does (I didn't bother os.walk-ing the directory structure), but I use it to quickly upload updated CSS or JavaScript. I'm sure it's a mess code-wise, but it does the job.
This will first YUI-compress the files and then gzip them before uploading to S3, and it will retain the path structure of the files under your MEDIA_ROOT directory. Hopefully someone will find this useful.
To use it, set up your Amazon details, download the [YUI Compressor](http://developer.yahoo.com/yui/compressor/), enter the folder you wish to upload to S3, and run the script: `python /path/to/s3_uploader.py`
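For a feel of the approach, here is a condensed sketch (the AWS settings, paths, and folder name are placeholders, and the real script differs in detail):
`
import gzip
import os
import subprocess
from cStringIO import StringIO

import boto
from boto.s3.key import Key

AWS_KEY = 'your-access-key'
AWS_SECRET = 'your-secret-key'
BUCKET_NAME = 'your-bucket'
MEDIA_ROOT = '/path/to/media'
YUI_JAR = '/path/to/yuicompressor.jar'
FOLDER = 'css'  # the folder under MEDIA_ROOT to upload

conn = boto.connect_s3(AWS_KEY, AWS_SECRET)
bucket = conn.get_bucket(BUCKET_NAME)

directory = os.path.join(MEDIA_ROOT, FOLDER)
for filename in os.listdir(directory):  # deliberately not os.walk-ing
    if not filename.endswith(('.css', '.js')):
        continue
    path = os.path.join(directory, filename)

    # 1. YUI-compress the file
    compressed = subprocess.Popen(
        ['java', '-jar', YUI_JAR, path],
        stdout=subprocess.PIPE).communicate()[0]

    # 2. gzip the result in memory
    buf = StringIO()
    gz = gzip.GzipFile(fileobj=buf, mode='wb')
    gz.write(compressed)
    gz.close()

    # 3. upload, keeping the MEDIA_ROOT-relative path as the key name
    key = Key(bucket)
    key.key = '%s/%s' % (FOLDER, filename)
    key.set_metadata('Content-Encoding', 'gzip')
    key.set_contents_from_string(buf.getvalue())
    key.set_acl('public-read')
`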