Welcome to pypo’s documentation!¶
Configuration¶
You can overwrite all default settings either directly in pypo/settings.py, or by creating the file pypo/settings_local.py, which is imported by pypo/settings.py and can therefore overwrite any setting.
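For example, a minimal pypo/settings_local.py could look like this (the values shown are placeholders, not defaults shipped with pypo):

SECRET_KEY = 'replace-this-with-a-long-random-string'
DEBUG = TEMPLATE_DEBUG = False
ALLOWED_HOSTS = ['bookmarks.example.com']
ADMINS = (
    ('Admin Name', 'admin@example.com'),
)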
SECRET_KEY¶
As in every Django application, you have to set a unique secret key. Django SECRET_KEY documentation
DEBUG, TEMPLATE_DEBUG, CRISPY_FAIL_SILENTLY¶
Enable or disable debugging (crispy refers to django-crispy-forms, the form rendering component)
ALLOWED_HOSTS¶
A list of hostnames. Django ALLOWED_HOSTS documentation
ADMINS¶
A list of ("name", "email") tuples for the admins
STATIC_ROOT¶
Absolute path where your static files are collected when you call ./manage.py collectstatic
STATIC_URL¶
URL where those files are available
DATABASES¶
Your database configuration. Pypo is tested with PostgreSQL, but any Django-supported database should work. Django DATABASES documentation
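For a local PostgreSQL database, the configuration might look like this (database name, user and password are placeholders):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'pypo',
        'USER': 'pypo',
        'PASSWORD': 'secret',
        'HOST': 'localhost',
        'PORT': '',
    }
}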
HAYSTACK_CONNECTIONS¶
If you want to use something other than Whoosh (a pure-Python search index), you can configure the search backend here. Django Haystack documentation. Switching to Elasticsearch is recommended for larger datasets:
HAYSTACK_CONNECTIONS = {
    'default': {
        'ENGINE': 'haystack.backends.elasticsearch_backend.ElasticsearchSearchEngine',
        'URL': 'http://127.0.0.1:9200/',
        'INDEX_NAME': 'pypo',
    },
}
Readme Module¶
Everything except for configuration.
Models¶
- class readme.models.Item(*args, **kwargs)[source]¶
Entry in the read-it-later-list
- created = None¶
param created Creation date of the item
- fetch_article()[source]¶
Fetches a title and a readable_article for the current url. It uses the scrapers module for this and only downloads the content.
- owner¶
param owner Owning user
- readable_article = None¶
param readable_article Processed content of the url
- safe_article = None¶
param safe_article Escaped and stripped of tags
- tags¶
param tags User assigned tags
- title = None¶
param title Page title
- url = None¶
param url Page url
Scrapers¶
- exception readme.scrapers.ParserException[source]¶
Generic exception for parsers/scrapers. Indicates that a scraper cannot succeed.
- readme.scrapers.domain_parser(domain)[source]¶
Decorator to register a domain specific parser
Parameters: domain – String
Returns: function
- readme.scrapers.parse(item, content_type, text=None, content=None)[source]¶
Scrape info from an item
Parameters:
- content_type – mime type
- text – unicode text
- content – byte string
- item – Item
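As an illustration, a domain-specific parser could be registered with the decorator like this. The parser signature and return value shown here mirror parse() and are assumptions, not the documented contract; 'example.com' is a placeholder domain:

from readme.scrapers import domain_parser, ParserException

@domain_parser('example.com')
def parse_example(item, content_type, text=None, content=None):
    # Assumed contract: return the scraped title and article text,
    # or raise ParserException when the scraper cannot succeed.
    if text is None:
        raise ParserException('no text to parse')
    return 'Example title', text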
About Pypo¶
Pypo is a self-hosted bookmarking service like Pocket. A rudimentary Android application and a Firefox extension are also available to add and view the bookmarks.
Its main components are built with:
- Python 3
- Postgresql
- Django
- readability-lxml
- Whoosh
- django-haystack
- django-taggit
- tld
- South
- requests
- djangorestframework
- py.test
- bleach
Documentation¶
Full documentation can be found at readthedocs
Features¶
- Add links and fetch their summaries and titles
- Links can have multiple tags
- Search by title, url and tags
- Filter by tags
Installation¶
- Create a virtualenv and install the dependencies:
$ pip install -r requirements.txt
$ pip install -e .
- Set up a PostgreSQL database
- You can overwrite the default settings by creating a settings_local.py next to pypo/settings.py. Do not edit settings.py directly.
- Install js modules with bower
$ npm install -g bower
$ bower install
- Install yuglify for JS and CSS minification
$ npm install -g yuglify
- Set up the database
$ ./manage.py syncdb
$ ./manage.py migrate
- Add a superuser
$ ./manage.py createsuperuser
- Host the application; see Deploying Django with WSGI. A minimal gunicorn example is sketched after this list.
- Create normal users with the admin interface /admin
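For example, assuming gunicorn is installed and the standard Django layout with pypo/wsgi.py, the application could be served like this (a sketch, not the project's prescribed deployment):

$ pip install gunicorn
$ gunicorn --bind 127.0.0.1:8000 pypo.wsgi:application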
Deploying¶
There is a fabfile you can customize to your liking. It creates a virtualenv, sets up the directory structure and checks out your current local commit on the target machine.
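Assuming Fabric is installed, you can list the tasks the fabfile defines with:

$ fab -l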
License¶
This project is licensed under the terms of the Apache License version 2. See COPYING.txt for details.