Tag "dump"

Snippet List

MongoDB data dump

This Django management command dumps data from a given MongoDB collection into a JSON file. To get it working, create a ``MONGODB_NAME`` variable in settings holding the name of your Mongo database. The command can be edited to better fit your needs. The snippet requires PyMongo, since it uses its ``bson`` module and ``MongoClient``.
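A minimal sketch of such a command (illustrative, not the snippet's actual code), assuming a local `mongod`, a `MONGODB_NAME` setting, and hypothetical `collection`/`outfile` arguments:

    # myapp/management/commands/mongodump_json.py (hypothetical path)
    from bson.json_util import dumps
    from django.conf import settings
    from django.core.management.base import BaseCommand, CommandError
    from pymongo import MongoClient

    class Command(BaseCommand):
        help = "Dump a MongoDB collection to a JSON file."

        def add_arguments(self, parser):
            parser.add_argument("collection")
            parser.add_argument("outfile")

        def handle(self, *args, **options):
            db = MongoClient()[settings.MONGODB_NAME]
            if options["collection"] not in db.list_collection_names():
                raise CommandError("Unknown collection: %s" % options["collection"])
            with open(options["outfile"], "w") as fh:
                # bson.json_util knows how to encode ObjectId, datetime, etc.
                fh.write(dumps(db[options["collection"]].find()))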

  • dump
  • fixtures
  • mongodb
Read More

Dump a model instance and related objects as a Python data structure

This utility makes a text dump of a model instance, including objects related by a forward or reverse foreign key. The result is a hierarchical data structure where:

  • each instance is represented as a list of fields,
  • each field as a (<name>, <value>) tuple,
  • each <value> as a primitive type, a related object (as a list of fields), or a list of related objects.

See the docstring for examples. We used this to make text dumps of parts of the database before and after running a batch job. The format was more useful than stock `dumpdata` output since all related data is included with each object. These dumps lend themselves particularly well to comparison with a visual diff tool like [Meld](http://meldmerge.org/).
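For flavour, a much-simplified sketch of the idea (the posted utility also follows reverse foreign keys and handles lists of related objects):

    def dump_instance(obj, follow_related=True):
        """Represent a model instance as a list of (name, value) tuples."""
        fields = []
        for field in obj._meta.fields:
            value = getattr(obj, field.name)
            if field.is_relation and value is not None and follow_related:
                # one level deep: replace the related object by its own field list
                value = dump_instance(value, follow_related=False)
            fields.append((field.name, value))
        return fields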

  • serialize
  • dump
  • model
  • instance
Read More

django_bulk_save.py - defer saving of django models to bulk SQL commits

When called, this module dynamically alters the behaviour of `model.save()` on a list of models so that the SQL is returned and aggregated for a bulk commit later on. This is much faster than performing bulk write operations using the standard `model.save()`. To use, simply save the code as django_bulk_save.py and replace this idiom:

    for m in model_list:
        # modify m ...
        m.save()  # ouch

with this one:

    from django_bulk_save import DeferredBucket
    deferred = DeferredBucket()
    for m in model_list:
        # modify m ...
        deferred.append(m)
    deferred.bulk_save()

Notes:

  • After performing a bulk_save(), the ids of the models do not get automatically updated, so code that depends on models having a pk (e.g. m2m assignments) will need to reload the objects via a queryset.
  • The post-save signal is not sent. See above.
  • This code has not been thoroughly tested, and is not guaranteed (or recommended) for production use.
  • It may stop working in newer Django versions, or whenever Django's model.save() related code gets updated in the future.

  • sql
  • tool
  • dump
  • save
  • db
  • bulk
  • util
Read More

Export Models

Warning: this Python script is designed for Django 0.96. It exports data from models much like the `dumpdata` command and writes the data to standard output. It fixes glitches with unicode/ascii characters: Django 0.96 seemed to handle unicode characters very badly unless you specified an argument that is not available via the command line. The simple usage is:

    $ python export_models.py -a <application1> [application2, application3...]

As a plus, it allows you to export only one or several models inside your application, rather than all of them:

    $ python export_models.py application1.MyModelStuff application1.MyOtherModel

Of course, you can specify the output format (serializer) with the -f (--format) option:

    $ python export_models.py --format=xml application1.MyModel
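On a current Django, the same effect can be sketched with the serialization framework (an assumed equivalent, not the 0.96 script itself; `export` is a hypothetical helper):

    import sys
    from django.apps import apps
    from django.core import serializers

    def export(model_labels, fmt="json"):
        for label in model_labels:                  # e.g. "application1.MyModel"
            model = apps.get_model(label)
            # ensure_ascii=False keeps unicode characters readable (JSON serializer)
            data = serializers.serialize(fmt, model.objects.all(), ensure_ascii=False)
            sys.stdout.write(data)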

  • tool
  • dump
  • serialization
  • export
  • db
  • database
Read More

Command to dump data as a python script

This creates a fixture in the form of a Python script. Handles:

  1. `ForeignKey` and `ManyToManyField`s (using python variables, not IDs)
  2. Self-referencing `ForeignKey` (and M2M) fields
  3. Sub-classed models
  4. `ContentType` fields
  5. Recursive references
  6. `AutoField`s are excluded
  7. Parent models are only included when no other child model links to them

There are a few benefits to this:

  1. edit the script to create 1,000s of generated entries using `for` loops, python modules etc.
  2. little drama with model evolution: foreign keys are handled naturally without IDs, and new and removed columns are ignored

The [runscript command by poelzi](http://code.djangoproject.com/ticket/6243) complements this command very nicely, e.g.

    $ ./manage.py dumpscript appname > scripts/testdata.py
    $ ./manage.py reset appname
    $ ./manage.py runscript testdata
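To give a feel for the output, a generated scripts/testdata.py might look roughly like this (hypothetical `appname` models; not actual dumpscript output):

    from appname.models import Author, Book

    author_1 = Author(name="Alice")
    author_1.save()

    # the foreign key is assigned through a Python variable, not a raw ID
    book_1 = Book(title="Example", author=author_1)
    book_1.save()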

  • dump
  • manage.py
  • serialization
  • fixtures
  • migration
  • data
  • schema-evolution
  • management
  • commands
  • command
Read More

Variable inspect filter

This module has two template filters allowing you to dump any template variable. Special handling for object instances. Pretty output. Usage:

    {% load dumper %}
    ...
    <pre>{{ var|rawdump }}</pre>

or

    {% load dumper %}
    ...
    {{ var2|dump }}

How to install: as usual, put it into `<your-proj>/<any-app>/templatetags/dumper.py`.
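A minimal sketch of what such a dumper.py could contain (an assumed implementation, not the posted code):

    import pprint
    from django import template
    from django.utils.html import escape
    from django.utils.safestring import mark_safe

    register = template.Library()

    @register.filter
    def rawdump(value):
        """Pretty-print any variable; pair it with <pre> in the template."""
        if hasattr(value, "__dict__"):       # object instance: show its attributes
            value = vars(value)
        return pprint.pformat(value)

    @register.filter
    def dump(value):
        """Like rawdump, but wraps the escaped output in a <pre> block itself."""
        return mark_safe("<pre>%s</pre>" % escape(rawdump(value)))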

  • template
  • filter
  • dump
  • variable
  • inspect
Read More

Database migration and dump/load script

I once needed to convert a Django project from PostgreSQL to SQLite. At that time I was either unaware of `manage.py dumpdata`/`loaddata` or they didn't yet exist. I asked for advice on the #django IRC channel, where ubernostrum came up with this plan: simple process: 1) Select everything. 2) Pickle it. 3) Save to file. 4) Read file. 5) Unpickle. 6) Save to db. :) Or something like that. First I thought it was funny, but then I started to think about it and it made perfect sense. And so dbpickle.py was born.

I've also used this script for migrating schema changes to production databases. For migration you can write plugins that hook into dbpickle.py's object retrieval and saving. This way you can add/remove/rename fields of objects on the fly when loading a dumped database. It's also possible to populate new fields with default values, or even with values computed from the object's other properties. A good way to use this is to create a database migration plugin for each schema change and use it with dbpickle.py to migrate the project.

See also the [original blog posting](http://akaihola.blogspot.com/2006/11/database-conversion-django-style.html) and [my usenet posting](http://groups.google.com/group/django-users/browse_thread/thread/6a4e9781d08ae815/c5c063a288483e07#c5c063a288483e07) wondering about the feasibility of this functionality with `manage.py dumpdata`/`loaddata`. See the [trac site](http://trac.ambitone.com/ambidjangolib/browser/trunk/dbpickle/dbpickle.py) for version history.
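In spirit, the core of the approach looks something like this (a bare sketch only; the real dbpickle.py adds plugin hooks and more care around related objects):

    import pickle
    from django.apps import apps

    def dump_db(path):
        data = {m._meta.label: list(m.objects.all().values()) for m in apps.get_models()}
        with open(path, "wb") as fh:
            pickle.dump(data, fh)

    def load_db(path):
        # run this after pointing settings at the new (empty) database
        with open(path, "rb") as fh:
            data = pickle.load(fh)
        for label, rows in data.items():
            model = apps.get_model(label)
            for row in rows:
                model.objects.create(**row)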

  • dump
  • database
  • migration
  • load
  • pickle
Read More

db_dump.py - for dumping and loading data from a database

Description
===========

This tool dumps and restores the database of a Django project. It can also handle some simple model changes, so it can be used to import data after a model migration. It provides two operations: dump and restore.

Dump
====

Command line:

    python db_dump.py [-svdh] [--settings] dump [applist]

If applist is omitted, all apps will be dumped. applist can be one or more app names. Description of options:

  • -s Write output to the console; the default is to write to a file.
  • -v Display execution information; the default is to display nothing.
  • -d Output directory; the default is a datadir directory under the current directory. If the path does not exist, it is created automatically.
  • -h Display help information.
  • --settings Settings module; the default is settings.py in the current directory.

Only the Python output format is supported for now. The tool creates a standard Python source file, for example:

    dump = {'table': 'tablename', 'records': [[...]], 'fields': [...]}

`table` is the table name in the database, `records` holds all records of the table as a list of lists (each record is a list), and `fields` holds the field names of the table.

Load (Restore)
==============

Command line:

    python db_dump.py [-svdrh] [--settings] load [applist]

The options described above apply here as well. In addition:

  • -r Do not empty the table while loading the data; the default is to empty the table first and then load the data.

Using this tool you can not only restore the database, but also deal with simple database changes. It automatically selects the suitable fields from the backup data file according to the changed Model, and it also handles default values defined in the Model, such as the default parameter and the auto_now and auto_now_add parameters of date-like fields. You can even edit the backup data file manually and add a default key to specify default values for some fields. The basic format is:

    'default': {'fieldname': ('type', 'value')}

default is a dict: each key is a field name of the table, and each value is a two-element tuple whose first element is a type and whose second element is a value. The possible types are:

  • 'value' (a literal value): use the value directly.
  • 'reference' (the name of another field): use the value of that field; useful when a field has been renamed.
  • 'date' ('now' or 'yyyy-mm-dd'): a date; if the value is 'now', the current time is used, otherwise the string is parsed as 'yyyy-mm-dd'.
  • 'datetime' ('now' or 'yyyy-mm-dd hh:mm:ss'): the same as above.
  • 'time' ('now' or 'hh:mm:ss'): the same as above.

The strategy for choosing a field's default value is: first build a default-value dict according to the Model, then update it from the default key of the backup data file. So if a field is defined in both the Model and the backup data file, the definition in the backup data file wins. Thanks to this handling of default values, the tool supports changes such as renaming a field or adding and removing fields, so you can use it to finish some simple database update work. I have not tested it much, and only on sqlite3, so downloads and tests are welcome, and I hope you can give me some advice for improvements.

Project site
============

http://code.google.com/p/db-dump/
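For example, an edited backup data file using the default key might look like this (the table and field names are made up):

    dump = {
        'table': 'blog_entry',
        'fields': ['id', 'title', 'created'],
        'records': [[1, 'Hello', '2008-01-01 12:00:00']],
        'default': {
            'author': ('value', 'admin'),         # fill a newly added column
            'headline': ('reference', 'title'),   # renamed field: copy from 'title'
            'updated': ('datetime', 'now'),       # use the current time at load
        },
    }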

  • tool
  • dump
Read More

8 snippets posted so far.