GeoNode Training Documentation¶
Welcome to the GeoNode Training Documentation.
From here you can browse all sections of the GeoNode Training Documentation.
Hint
An Ubuntu 14.04 based Virtual Machine, containing the base packages and data for the execution of the workshop exercises, is available for download here. The Virtual Machine can be launched through VirtualBox or VMWare Player binaries.
The users to be used with the Virtual Machine are:
geo/geo # principal user with sudoers privileges
geonode/geo # /home/geonode is the location where you can find sources, binaries and training data
postgres/postgres # system user for the management of the DBMS
The Documentation is divided into six main sections:
Hint
The whole documentation for offline reading as a ZIP archive can be downloaded here
GeoNode Overview & Reference¶
Welcome to the GeoNode Training Overview & Reference documentation.
This module guides the user through an overview of GeoNode and its main components.
At the end of this section you will have a clear view of what GeoNode is and what it can do. You will also be able to use GeoNode's main functionalities and understand some of the basic concepts of the system infrastructure.
Users’ Features¶
Open Source Geospatial Content Management System
GeoNode is a web-based application and platform for developing geospatial information systems (GIS) and for deploying spatial data infrastructures (SDI).
What GeoNode can be used for… GeoNode Demo (admin/admin)
Spatial Data Discovery¶
GeoNode allows users to browse and search for geospatial data. By combining collaboration found in social networks with specialized geospatial tools, GeoNode makes it easy to explore, process, style, and share maps and geospatial data. Spatial datasets can be imported and shared, all through a non-technical user interface.
Features include:
- Powerful spatial search engine
- Federated OGC services
- Metadata catalogue
Import and Manage¶
GeoNode allows users to upload and share geospatial data, securely. GeoNode makes it easy to upload and manage geospatial data on the web. Any user can upload and make content available via standard OGC protocols such as Web Map Service (WMS) and Web Feature Service (WFS). Data is available for browsing, searching, styling, and processing to generate maps which can be shared publicly or restricted to specific users only. Supported upload formats include ShapeFile, GeoTIFF, KML and CSV. In addition, it is possible to connect to existing external spatial databases and services.
Features include:
- Publish raster, vector, and tabular data
- Manage metadata and associated documents
- Securely or publicly share data
- Versioned geospatial data editor
Interactive Mapping¶
GeoNode allows users to create and share interactive web maps. GeoNode comes with helpful cartography tools for styling and composing maps graphically. These tools make it easy for anyone to assemble a web-based mapping application with functionality traditionally found in desktop GIS applications. Users can gain enhanced interactivity with GIS-specific tools such as querying and measuring.
Features include:
- GeoExplorer GIS client
- Graphical style editor
- Create multi-layer interactive maps
- Share and embed maps in web pages
Introduction¶
This section introduces the GeoNode GUI and functionalities through a step-by-step workshop.
At the end of this module the users will be familiar with the GeoNode default GUI and objects.
A Tour Of GeoNode¶
In order to get started, let’s look at the GeoNode interface and get a feel for how to navigate around it.
The GeoNode web interface is the primary method of interacting with GeoNode as a user. From this interface, one can view and modify existing spatial layers and maps, as well as find information on other GeoNode users.
Without being logged in, you are limited to read-only access of public layers.
Navigate to your GeoNode instance (online demo available here):
Welcome page
This page shows a variety of information about the current GeoNode instance. At the top of the page is a toolbar showing quick links to view layers, maps, documents (metadata), people, and a search field. Below this is a listing of recently updated layers, including abstract, owner, rating, and download button (if available).
Click the Explore button and choose Preview. Table data can be visualized as a Grid, Graph or Map.
Find a layer and click on its name.
A layer viewing page will display, with the layer itself superimposed on a hosted base layer (in this case MapQuest OpenStreetMap). Explore this page, noting the various options available to you.
Now click the Maps link in the tool bar to go to the Explore Maps page.
Click the Search link in the toolbar to bring up the Search page.
Now that you are familiar with the basic interface, the next step is to create your own account so you can manage some GeoNode resources of your own.
GeoNode Quickstart¶
Open Source Geospatial Content Management System
GeoNode is a web-based application and platform for developing geospatial information systems (GIS) and for deploying spatial data infrastructures (SDI).
In this Quickstart guide you will learn the following:
- to register a new account to get started
- add a new layer
- create a map using your new layer
- share your map with others
Start GeoNode on your Live DVD or local VM and point your browser to http://localhost:8000/ (this is the default port). The page will look like the image below.
1. Register a new account¶
From the interface shown above, one can view and modify existing spatial layers and maps, as well as find information on other GeoNode users. But, without being logged in, you are limited to read-only access of public layers. In order to create a map and add layers to it, you have to create an account first.
From any page in the web interface, you will see a Sign in link. Click that link, and in the dialog that displays, click the Register now link.
On the next page, fill out the form. Enter a user name and password in the fields. Also, enter your email address for verification.
You will be returned to the welcome page. An email will be sent confirming that you have signed up. While you are now logged in, you will need to confirm your account. Navigate to the link that was sent in the email.
By clicking Confirm you will be returned to the homepage. Now that you've registered an account, you are able to add layers to it as well as create maps and share those with other users.
2. Add a new layer¶
A layer is a published resource representing a raster or vector spatial data source. Layers can also be associated with metadata, ratings, and comments.
To add a layer to your account, navigate to the welcome page. There the following toolbar can be seen:
By clicking the Layers link you will be brought to the Layers menu where a new subtoolbar can be seen. This toolbar allows you to Explore, Search and Upload layers.
Now click Upload Layers and you’ll see the upload form.
You can add your files either by drag & drop or by browsing for them. Be aware that you have to upload a complete set of files, consisting of a shp, a prj, a dbf and a shx file. If one of them is missing, GeoNode will warn you before you upload them.
You should now be able to see all the files you want to upload.
GeoNode has the ability to restrict who can view, edit, and manage layers. On the right side of the page you can see the Permission section, where you can limit access to your layer. Under Who can view and download this data, select Any registered user. This will ensure that anonymous view access is disabled. In the same area, under Who can edit this data, select your username. This will ensure that only you are able to edit the data in the layer.
To upload data, click the Upload button at the bottom.
3. Create a new map¶
The next step for you is to create a map and add the newly created layers to this map.
Click the Maps link on the top toolbar. This will bring up the list of maps.
Currently, there aren’t any maps here. To add one click the Create a New Map button and a map composition interface will display.
Click on the New Layers button and select Add Layers.
Add Layers
Now you should be able to see all the available layers. In your case, these should only be the ones you've added before (San Andreas?).
Select all of the layers by clicking the top entry and Shift-clicking the bottom one. Click Add Layers to add them all to the map.
Add Layers
The layers will be added to the map. Click Done (right next to Add Layers at the bottom) to return to the main layers list.
To save the map click on the Map button in the toolbar, and select Save Map.
Save Map
Enter a title and abstract for your map.
Edit Map Metadata
Click Save. Notice that the link on the top right of the page changed to reflect the map’s name.
Save Map
This link contains a permalink to your map. If you open this link in a new window, your map will appear exactly as it was saved.
To be continued¶
Now you've gained a quick insight into the possibilities of GeoNode. To learn more about GeoNode and its features, visit the official webpage www.geonode.org.
Stay in touch with the GeoNode community through the #geonode IRC channel using http://webchat.freenode.net/ or by asking your question in our Google group!
Reference Doc¶
In this section, you will find information about each and every component of GeoNode, for example GeoServer, GeoNode Settings, Security, etc.
The Big Picture¶
Architecture¶

GeoNode Component Architecture
The GeoNode core is based on the Django web framework, with a few more dependencies necessary for the communication with the geospatial servers (GeoServer, pycsw).
On the left side you can see the list of entities defined in GeoNode and managed by the Django ORM framework. Those objects will be detailed in a later section.
On the right side is the list of available services that allow GeoNode to communicate with the outside world.
The GeoNode catalog is strictly connected to the GeoServer one (see the bottom of the figure). The geospatial datasets and the OGC services are implemented and managed by GeoServer. GeoNode acts as a broker for the geospatial layers, adding metadata information and tools that make the management, cataloging, mapping and searching of the datasets easier.
Thanks to the ORM framework and the auxiliary Python libraries, GeoNode is constantly aligned with the GeoServer catalog. An ad-hoc security module allows the two modules to strictly interact and share security and permissions rules.
In the advanced sections of this documentation we will go through GeoNode commands allowing administrators to re-sync the catalogs and keep them consistently aligned.
Django Architecture¶
GeoNode is based on Django, a high-level Python web development framework that encourages rapid development and clean, pragmatic design. Django follows the Model View Controller (MVC) architecture pattern: GeoNode models layers, maps and other modules with Django's Model module, and these models are used via Django's ORM in views, which contain the business logic of the GeoNode application and drive HTML templates to display the web pages within the application.
Django explained with model/view/controller (MVC)¶
- Model represents application data and provides rich ORM functionality.
- Views are a rendering of a Model most often using the Django template engine.
- In Django, the controller part of this layered architecture is a subject of discussion. According to the standard definition, the controller is the layer or component through which the user interacts and through which model changes occur.
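The three roles above can be sketched in plain Python. This is a hypothetical, framework-free illustration of the separation of concerns, not GeoNode's or Django's actual code; all names (Layer, layer_detail, dispatch) are invented for the example.

```python
class Layer:
    """Model: holds application data (in Django this would be a models.Model)."""
    def __init__(self, title, abstract):
        self.title = title
        self.abstract = abstract

def render(template, context):
    """Stand-in for the Django template engine."""
    return template.format(**context)

def layer_detail(layer):
    """View: turns a Model into a rendered response."""
    return render("<h1>{title}</h1><p>{abstract}</p>",
                  {"title": layer.title, "abstract": layer.abstract})

def dispatch(path, layers):
    """Controller role: map an incoming request path to the right view."""
    if path.startswith("/layers/"):
        name = path.split("/")[-1]
        return layer_detail(layers[name])
    return "404 Not Found"

layers = {"roads": Layer("Roads", "All roads in the region")}
print(dispatch("/layers/roads", layers))  # the rendered detail page
```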
Model, View, Presenter
In MVP, the Presenter contains the UI business logic for the View. All invocations from the View delegate directly to the Presenter. The Presenter is also decoupled directly from the View and talks to it through an interface. This is to allow mocking of the View in a unit test. One common attribute of MVP is that there has to be a lot of two-way dispatching. For example, when someone clicks the Save button, the event handler delegates to the Presenter’s OnSave method. Once the save is completed, the Presenter will then call back the View through its interface so that the View can display that the save has completed.
MVP tends to be a very natural pattern for achieving separated presentation in Web Forms.
Two primary variations are described below.
Passive View: The View is as dumb as possible and contains almost zero logic. The Presenter is a middle man that talks to the View and the Model. The View and Model are completely shielded from one another. The Model may raise events, but the Presenter subscribes to them for updating the View. In Passive View there is no direct data binding, instead the View exposes setter properties which the Presenter uses to set the data. All state is managed in the Presenter and not the View.
- Pro: maximum testability surface; clean separation of the View and Model
- Con: more work (for example all the setter properties) as you are doing all the data binding yourself.
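The Passive View variant can be sketched in a few lines of plain Python. All class names here are hypothetical and only illustrate the pattern: the View holds state set by the Presenter and never touches the Model.

```python
class Model:
    """Domain model: knows how to save itself, nothing about the UI."""
    def __init__(self):
        self.saved = False
    def save(self):
        self.saved = True

class View:
    """As dumb as possible: exposes a setter-like attribute and no logic."""
    def __init__(self):
        self.status_text = ""

class Presenter:
    """Middle man: all UI business logic lives here."""
    def __init__(self, view, model):
        self.view = view
        self.model = model
    def on_save(self):
        # The View delegated the Save click here; the Presenter updates
        # the Model and then pushes the result back into the View.
        self.model.save()
        self.view.status_text = "Save completed"

view, model = View(), Model()
presenter = Presenter(view, model)
presenter.on_save()      # e.g. triggered by a Save button click
print(view.status_text)
```

Because the View is a plain data holder, a unit test can replace it with a mock and exercise the Presenter's logic in isolation.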
Supervising Controller: The Presenter handles user gestures. The View binds to the Model directly through data binding. In this case it’s the Presenter’s job to pass off the Model to the View so that it can bind to it. The Presenter will also contain logic for gestures like pressing a button, navigation, etc.
- Pro: by leveraging databinding the amount of code is reduced.
- Con: there’s less testable surface (because of data binding), and there’s less encapsulation in the View since it talks directly to the Model.
Model, View, Controller
In the MVC, the Controller is responsible for determining which View is displayed in response to any action including when the application loads.
This differs from MVP where actions route through the View to the Presenter. In MVC, every action in the View correlates with a call to a Controller along with an action. In the web each action involves a call to a URL on the other side of which there is a Controller who responds. Once that Controller has completed its processing, it will return the correct View. The sequence continues in that manner throughout the life of the application:
Action in the View -> Call to Controller -> Controller Logic -> Controller returns the View

One other big difference about MVC is that the View does not directly bind to the Model. The View simply renders, and is completely stateless. In implementations of MVC, the View usually will not have any logic in the code behind. This is contrary to MVP, where it is absolutely necessary: if the View does not delegate to the Presenter, it will never get called.
One other pattern to look at is the Presentation Model pattern. In this pattern there is no Presenter. Instead the View binds directly to a Presentation Model. The Presentation Model is a Model crafted specifically for the View. This means this Model can expose properties that one would never put on a domain model as it would be a violation of separation-of-concerns. In this case, the Presentation Model binds to the domain model, and may subscribe to events coming from that Model. The View then subscribes to events coming from the Presentation Model and updates itself accordingly. The Presentation Model can expose commands which the view uses for invoking actions. The advantage of this approach is that you can essentially remove the code-behind altogether as the PM completely encapsulates all of the behaviour for the view.
This pattern is a very strong candidate for use in WPF applications and is also called Model-View-ViewModel.
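A minimal Presentation Model sketch in plain Python (hypothetical names throughout): the Presentation Model wraps the domain model, exposes a view-specific property, and raises events that the View subscribes to.

```python
class DomainModel:
    """Plain domain object: no presentation concerns."""
    def __init__(self, price):
        self.price = price

class PresentationModel:
    """Model crafted for the View: wraps the domain model, raises events."""
    def __init__(self, domain):
        self.domain = domain
        self._subscribers = []
    def subscribe(self, callback):
        self._subscribers.append(callback)
    @property
    def price_label(self):
        # A display-ready property one would never put on the domain model.
        return f"$ {self.domain.price:.2f}"
    def set_price(self, value):
        self.domain.price = value
        for cb in self._subscribers:
            cb()  # notify subscribed views

class View:
    """Subscribes to the Presentation Model and updates itself."""
    def __init__(self, pm):
        self.pm = pm
        self.label = ""
        pm.subscribe(self.refresh)
    def refresh(self):
        self.label = self.pm.price_label

pm = PresentationModel(DomainModel(10))
view = View(pm)
pm.set_price(12.5)
print(view.label)
```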
More: http://reinout.vanrees.org/weblog/2011/12/13/django-mvc-explanation.html
WSGI¶
Web Server Gateway Interface (whis-gey)
- This is a Python specification for supporting a common interface between all of the various web frameworks and a server (Apache, for example) that is 'serving' the application.
- This allows any WSGI compliant framework to be hosted in any WSGI compliant server.
- For most GeoNode development, the details of this specification may be ignored.
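The interface itself is small enough to show in full. Below is a minimal WSGI application callable, using only the standard library; this illustrates the specification described above and is not GeoNode's actual application.

```python
def application(environ, start_response):
    """The WSGI callable: any WSGI-compliant server can host it."""
    body = b"Hello from a WSGI app"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# A WSGI server drives the callable roughly like this (simplified):
captured = {}
def start_response(status, headers):
    captured["status"], captured["headers"] = status, headers

chunks = application({"REQUEST_METHOD": "GET", "PATH_INFO": "/"},
                     start_response)
print(captured["status"], b"".join(chunks))
```

To actually serve it, the standard library's `wsgiref.simple_server.make_server("", 8000, application).serve_forever()` works out of the box.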
GeoNode and GeoServer¶
GeoNode uses GeoServer for providing OGC services.
At its core, GeoNode provides a standards-based platform to enable integrated, programmatic access to your data via OGC Web Services, which are essential building blocks required to deploy an OGC-compliant spatial data infrastructure (SDI). These web services enable discovery, visualization of and access to your data, all without necessarily having to interact directly with your GeoNode website, look and feel/UI, etc.
GeoNode configures GeoServer via the REST API
GeoNode retrieves and caches spatial information from GeoServer. This includes relevant OGC service links, spatial metadata, and attribute information.
In summary, GeoServer contains the layer data, and GeoNode’s layer model extends the metadata present in GeoServer with its own.
GeoNode can discover existing layers published in a GeoServer via the WMS capabilities document.
GeoServer delegates authentication and authorization to GeoNode.
Data uploaded to GeoNode is first processed in GeoNode and finally published to GeoServer (or ingested into the spatial database).
OGC Web Services:¶
- operate over HTTP (GET, POST)
- provide a formalized, accepted API
- provide formalized, accepted formats
The OGC Web Services provided by GeoNode have a mature implementation base and provide a multi-application approach to integration. This means that, as a developer, there are already numerous off-the-shelf GIS packages, tools and webapps (proprietary, free, open source) that natively support OGC Web Services.
There are numerous ways to leverage OGC Web Services from GeoNode:
- desktop GIS
- web-based application
- client libraries / toolkits
- custom development
Your GeoNode lists OGC Web Services and their URLs at http://localhost:8000/developer. You can use these APIs directly to interact with your GeoNode.
The following sections briefly describe the OGC Web Services provided by GeoNode.
WMS provides an API to retrieve map images (PNG, JPEG, etc.) of geospatial data. WMS is suitable for visualization and when access to raw data is not a requirement.
WFS provides an API to retrieve raw geospatial vector data directly. WFS is suitable for direct query and access to geographic features.
WCS provides an API to retrieve raw geospatial raster data directly. WCS is suitable for direct access to satellite imagery, DEMs, etc.
CSW provides an interface to publish and search metadata (data about data). CSW is suitable for cataloguing geospatial data and making it discoverable to enable visualization and access.
WMTS provides an API to retrieve pre-rendered map tiles of geospatial data.
WMC provides a format to save and load map views and application state via XML. This allows, for example, a user to save their web mapping application in WMC and share it with others, viewing the same content.
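Because these services operate over plain HTTP, requests can be built with nothing but the standard library. The sketch below assembles a WMS GetMap URL; the endpoint and layer name are placeholders to be replaced with your own GeoNode/GeoServer values.

```python
from urllib.parse import urlencode

def getmap_url(base, layer, bbox, size=(512, 512)):
    """Build a WMS 1.1.1 GetMap request URL for a PNG map image."""
    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": layer,
        "bbox": ",".join(str(c) for c in bbox),  # minx,miny,maxx,maxy
        "width": size[0],
        "height": size[1],
        "srs": "EPSG:4326",
        "format": "image/png",
    }
    return base + "?" + urlencode(params)

# Placeholder endpoint and layer name for illustration only.
url = getmap_url("http://localhost:8080/geoserver/wms",
                 "geonode:roads", (-180, -90, 180, 90))
print(url)
```

Opening such a URL in a browser, or fetching it with any HTTP client, returns the rendered map image directly.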
More: http://geoserver.org
GeoNode and PostgreSQL/PostGIS¶
In production, GeoNode is configured to use PostgreSQL/PostGIS for its persistent store. In development and testing mode, an embedded SQLite database is often used. The latter is not suggested for production.
- The database stores configuration and application information. This includes users, layers, maps, etc.
- It is recommended that GeoNode be configured to use PostgreSQL/PostGIS for storing vector data as well. While serving layers directly from shapefiles allows for adequate performance in many cases, storing features in the database allows for better performance, especially when using complex style rules based on attributes.
GeoNode and pycsw¶
GeoNode is built with pycsw embedded as the default CSW server component.
Publishing¶
Since pycsw is embedded in GeoNode, layers published within GeoNode are automatically published to pycsw and discoverable via CSW. No additional configuration or actions are required to publish layers, maps or documents to pycsw.
Discovery¶
GeoNode's CSW endpoint is available at http://localhost:8000/catalogue/csw and can be used by clients for standards-based discovery. See http://docs.pycsw.org/en/latest/tools.html for a list of CSW clients and tools.
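A CSW search can be expressed as simple key-value parameters against that endpoint. The sketch below builds a GetRecords request URL with the standard library only; the endpoint is the default development URL mentioned above, the CQL filter text is a made-up example, and a strictly conformant client (such as OWSLib) would add further parameters.

```python
from urllib.parse import urlencode

csw_endpoint = "http://localhost:8000/catalogue/csw"  # default dev URL
params = {
    "service": "CSW",
    "version": "2.0.2",
    "request": "GetRecords",
    "typenames": "csw:Record",
    "elementsetname": "brief",
    "resulttype": "results",
    "constraintlanguage": "CQL_TEXT",
    # Hypothetical filter: find records mentioning "roads" anywhere.
    "constraint": "csw:AnyText LIKE '%roads%'",
}
url = csw_endpoint + "?" + urlencode(params)
print(url)
```

Fetching this URL returns an XML response listing the matching catalogue records.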
Javascript in GeoNode¶
GeoNode provides a number of facilities for interactivity in the web browser built on top of several high-quality JavaScript frameworks:
- Bootstrap for GeoNode’s front-end user interface and common user interaction.
- Bower for GeoNode’s front-end package management.
- ExtJS for component-based UI construction and data access
- OpenLayers for interactive mapping and other geospatial operations
- GeoExt for integrating ExtJS with OpenLayers
- Grunt for front-end task automation.
- GXP for providing some higher-level application building facilities on top of GeoExt, as well as improving integration with GeoServer.
- jQuery to abstract javascript manipulation, event handling, animation and Ajax.
GeoNode uses application-specific modules to handle pages and services that are unique to GeoNode. This framework includes:
- A GeoNode mixin class that provides GeoExplorer with the methods needed to properly function in GeoNode. The class is responsible for checking permissions, retrieving and submitting the CSRF token, and user authentication.
- A search module responsible for GeoNode's site-wide search functionality.
- An upload and status module to support file uploads.
- Template files for generating commonly used html sections.
- A front-end testing module to test GeoNode javascript.
The following concepts are particularly important for developing on top of GeoNode's JavaScript framework.
- Components
- Ext components handle most interactive functionality in “regular” web pages. For example, the scrollable/sortable/filterable table on the default Search page is a Grid component. While GeoNode does use some custom components, familiarity with the idea of Components used by ExtJS is applicable in GeoNode development.
- Viewers
- Viewers display interactive maps in web pages, optionally decorated with Ext controls for toolbars, layer selection, etc. Viewers in GeoNode use the GeoExplorer base class, which builds on top of GXP’s Viewer to provide some common functionality such as respecting site-wide settings for background layers. Viewers can be used as components embedded in pages, or they can be full-page JavaScript applications.
- Controls
- Controls are tools for use in OpenLayers maps (such as a freehand control for drawing new geometries onto a map, or an identify control for getting information about individual features on a map.) GeoExt provides tools for using these controls as ExtJS “Actions” - operations that can be invoked as buttons or menu options or associated with other events.
Components¶
GeoNode's architecture is based on a set of core tools and libraries that provide the building blocks on which the application is built. Having a basic understanding of each of these components is critical to your success as a developer working with GeoNode.
Let's look at each of these components and discuss how they are used within the GeoNode application.
Django¶
GeoNode is based on Django, a high-level Python web development framework that encourages rapid development and clean, pragmatic design. Django follows the Model View Controller (MVC) architecture pattern: GeoNode models layers, maps and other modules with Django's Model module, and these models are used via Django's ORM in views, which contain the business logic of the GeoNode application and drive HTML templates to display the web pages within the application.
GeoServer¶
GeoServer is an open source software server written in Java that provides OGC compliant services which publish data from many spatial data sources. GeoServer is used as the core GIS component inside GeoNode: it renders the layers in a GeoNode instance, creates map tiles from the layers, provides for downloading those layers in various formats and allows for transactional editing of those layers.
GeoExplorer¶
GeoExplorer is a web application, based on the GeoExt framework, for composing and publishing web maps with OGC and other web based GIS Services. GeoExplorer is used inside GeoNode to provide many of the GIS and cartography functions that are a core part of the application.
PostgreSQL and PostGIS¶
PostgreSQL and PostGIS are the database components that store and manage spatial data and information for GeoNode and the Django modules it is composed of, pycsw and GeoServer. All of these tables and data are stored within a geonode database in PostgreSQL. GeoServer uses PostGIS to store and manage spatial vector data for each layer, each of which is stored as a separate table in the database.
pycsw¶
pycsw is an OGC CSW server implementation written in Python. GeoNode uses pycsw to provide an OGC compliant standards-based CSW metadata and catalogue component of spatial data infrastructures, supporting popular geospatial metadata standards such as Dublin Core, ISO 19115, FGDC and DIF.
Geospatial Python Libraries¶
GeoNode leverages several geospatial Python libraries, including gsconfig and OWSLib. gsconfig is used to communicate with GeoServer's REST Configuration API to configure GeoNode layers in GeoServer. OWSLib is used to communicate with GeoServer's OGC services and can be used to communicate with other OGC services as well.
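Under the hood, a library like gsconfig issues authenticated HTTP calls against GeoServer's REST API. The sketch below builds such a request with the standard library only, to show the shape of the interaction; the URL and credentials are the GeoServer defaults used as placeholders, and no request is actually sent.

```python
import base64
import urllib.request

def rest_request(base, path, user, password):
    """Prepare an authenticated request to a GeoServer REST endpoint."""
    req = urllib.request.Request(base + path)
    # GeoServer's REST API uses HTTP Basic authentication.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    req.add_header("Accept", "application/json")
    return req

# Placeholder defaults for illustration; use your own instance's values.
req = rest_request("http://localhost:8080/geoserver/rest",
                   "/layers.json", "admin", "geoserver")
print(req.full_url)
```

Passing `req` to `urllib.request.urlopen` would list the published layers as JSON; gsconfig wraps this kind of exchange in a convenient Python object model.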
Django Pluggables¶
GeoNode uses a set of Django plugins which are usually referred to as pluggables. Each of these pluggables provides a particular set of functionality inside the application from things like Registration and Profiles to interactivity with external sites. Being based on Django enables GeoNode to take advantage of the large ecosystem of these pluggables out there, and while a specific set is included in GeoNode itself, many more are available for use in applications based on GeoNode.
jQuery¶
jQuery is a feature-rich javascript library that is used within GeoNode to provide an interactive and responsive user interface as part of the application. GeoNode uses several jQuery plugins to provide specific pieces of functionality, and the GeoNode development team often adds new features to the interface by adding additional plugins.
Bootstrap¶
Bootstrap is a front-end framework for laying out and styling the pages that make up the GeoNode application. It is designed to ensure that the pages render, look and behave the same across all browsers. GeoNode customizes Bootstrap's default style, and it is relatively easy for developers to customize their own GeoNode-based site using existing Bootstrap themes or by customizing the styles directly.
Installation & Admin¶
Welcome to the GeoNode Training Installation & Admin documentation.
This module is more oriented to users having some System Administrator background.
At the end of this section you will be able to set up the whole GeoNode infrastructure from scratch and understand how the different pieces are interconnected and what their dependencies are.
Prerequisites
Before proceeding with the reading, it is strongly recommended to make sure you have a clear understanding of the following concepts:
- GeoNode and Django framework basic concepts
- What is Python
- What is a DBMS
- What is a Java Virtual Machine and the JDK
- Basic TCP/IP and networking concepts
- Linux OS basic shell and maintenance commands
- Apache HTTPD Server and WSGI Python bindings
Linux Admin Intro¶
This part of the documentation contains basic instructions on how to set up and manage a Virtual Machine.
Ubuntu Basic Tutorial¶
Ubuntu is one of the most widespread Linux distributions.
In this section of the documentation you will learn how to do basic operations in Ubuntu such as log in and log out, launch applications and install new software.
User Login¶
When you first start Ubuntu, at the end of the boot process you will see the Ubuntu login screen.

Select the user you want to log in as, enter the password and press Enter. In a few seconds the user's desktop will appear.

User Interface Walkthrough¶
The panel on the left side of the screen contains shortcuts to frequently used applications. From the dark grey bar at the top you can reach network settings (the icon with two arrows pointing in opposite directions), system language (the icon with En written inside it), audio volume, system date and time, and the power menu (top right corner, with an icon halfway between a gear and a power button).
From the power menu you can switch to a different user, logout, power off the system or access system settings.
In the System Settings menu you can set several different parameters for the system.

Launch an application¶
You can launch the applications listed in the Favourites panel simply by clicking on them.
If the application you want to launch is not in the favourites panel, use the Ubuntu Launcher. Click on the Ubuntu Launcher icon in the top left corner of the screen

Type the name of the application. A list of applications matching the name you are searching for will show up; press Enter or click on the icon of the application.
Install new software¶
To install new software, open the Ubuntu Software Center (you will find it in the favourite applications panel).
Enter the name of the application you are looking for in the search bar

A list of candidate applications will appear. Click on the one you want to install, then click Install. You will be prompted for the administrative password.

And your application will be installed in the system.
Launch the terminal emulator¶
Click on the Ubuntu Launcher icon in the top left corner of the screen, and type gnome-terminal in the search box

And launch the terminal emulator.

The terminal emulator will open and will be ready for your commands.
Basic commands¶
Current working directory¶
$ pwd
/home/geo
The pwd command will show you your working directory, that is the directory you are inside of and running your commands in.
Create a directory¶
$ mkdir test
To create a new directory inside your working directory use the mkdir command followed by the folder name argument
Delete a directory¶
$ rmdir test
To delete an empty directory type rmdir followed by the folder name argument
Create an empty file¶
$ touch testfile
To create an empty file in your current working directory use the touch command followed by the name of the file
Change working directory¶
$ cd /home
To change your current working directory use the cd command followed by the path (location) you want to change to
List content of a folder¶
$ ls
The ls command will list the content of your current working directory. You can optionally provide a path to a directory as argument, in that case ls will show you the content of that directory
$ ls /home
geo geonode
Home folder¶
A user's home folder is the folder where he or she will do most of their work. Inside your home folder you can freely create or delete files and folders.
To switch to your home folder you can use the tilde ~ character as a shortcut
$ cd ~
$ pwd
/home/geo
For more information on Ubuntu refer to the Ubuntu user manual
For more terminal commands read the Using the terminal guide
CentOS Basic Tutorial¶
This section is still under construction.
VM Setup with VirtualBox¶
In this section you will find instructions on how to setup an Ubuntu 14.04 VM in VirtualBox
VirtualBox Setup¶
Download VirtualBox from the official web site. Choose the installer matching your operating system and architecture.
The installation process is straightforward; refer to the official VirtualBox documentation if you encounter any problems.
Windows¶
After you have downloaded the executable, double click on it to launch the installer.

Customize VirtualBox features and paths if you need to, or leave the defaults

And start the installation process


Click on “Finish”

VirtualBox is now installed and will be launched automatically.
Ubuntu¶
After you downloaded the package, double click on it. The “Ubuntu Software Center” will pop up, click on “Install” to start the installation process

You will be prompted for administrator password.
At the end of the installation process, launch VirtualBox.

Virtual Machine Setup¶
Now that VirtualBox is installed on the system it is time to setup our Ubuntu VM.
Click the light blue New button in VirtualBox user interface.
Choose a name for the Virtual Machine and select the appropriate VM type and version

Then select the amount of memory you want to assign to the VM. Ubuntu recommends at least 512 MB of memory, but we are going to need more than that to run GeoNode; refer to the System Preparation & Prerequisites section for details.

Create a new virtual disk for the VM. Again, refer to the System Preparation & Prerequisites section for details about disk size; for testing purposes 30 GB will be enough.
Now edit the Virtual Machine settings

Under “Storage” select the empty DVD drive, click on Live CD/DVD as shown below

Click on the DVD icon next to the Optical Drive drop down menu and select the Ubuntu 14.04 .iso file from your file system

Edit other VM settings if you need to, then click OK.
We are ready to start our Ubuntu VM for the first time. Select it from the main menu and click on Start

Ubuntu will start the boot process

At the end of the boot process you will be asked if you want to Try Ubuntu or Install Ubuntu. Select the language in the left panel and click on Install Ubuntu

The installer will check your internet connection and available disk space. If you are connected to the internet check the Download updates while installing checkbox.

Click on Continue. On the next page you will configure disk partitioning. If you recall, we created a new virtual disk for Ubuntu during the VM configuration process; we are going to assign the entire disk to it. Select Erase disk and install Ubuntu, then click Install Now

You will be prompted for confirmation.
Now select the correct time zone for your location, then select the language for the VM and enter the details for the administrator user.



The installation will continue automatically. At the end of the installation process a pop up window will ask you to restart the system to start using Ubuntu. Click on Restart Now

Running a VM with Vagrant¶
In this section you will find instructions on how to setup an Ubuntu 14.04 VM using Vagrant
Vagrant Setup¶
Download Vagrant from the official web site. Choose the installer matching your operating system and architecture.
Installation process is straightforward, refer to Vagrant official documentation if you encounter any problem.
At the end of the installation process, log out of your system and log back in.
Vagrant needs a provider in order to set up the Virtual Machines. VirtualBox is supported out of the box; just make sure you install one of the supported versions of VirtualBox
Open a terminal and type vagrant version. A message containing the installed version of Vagrant will be printed on the terminal
Installed Version: 1.7.2
Latest Version: 1.7.4
...
Virtual Machine Setup¶
Now that Vagrant is installed, let’s create our Ubuntu Virtual Machine. Open the terminal, create a new folder called vagrant, enter the folder, and run
vagrant init ubuntu/trusty32
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.
This will create a configuration file called Vagrantfile containing the settings for the virtual machine; notably, the config.vm.box variable set to “ubuntu/trusty32” tells Vagrant the specific VM we want to run (Ubuntu 14.04 “Trusty Tahr”, 32-bit version)
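Stripped of its many commented-out options, the generated Vagrantfile boils down to a few lines like the following (a minimal sketch, not the verbatim generated file):

```ruby
# Minimal Vagrantfile sketch for the Ubuntu 14.04 32-bit box
Vagrant.configure(2) do |config|
  # the base image Vagrant will download and boot
  config.vm.box = "ubuntu/trusty32"
end
```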
To start the VM, run
vagrant up
The first time you run the command it is going to take some time, since you do not have a locally available image of the VM. Vagrant will download the VM from the Vagrant Cloud to your local system.
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'ubuntu/trusty32' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
==> default: Loading metadata for box 'ubuntu/trusty32'
default: URL: https://atlas.hashicorp.com/ubuntu/trusty32
==> default: Adding box 'ubuntu/trusty32' (v20150928.0.0) for provider: virtualbox
default: Downloading: https://atlas.hashicorp.com/ubuntu/boxes/trusty32/versions/20150928.0.0/providers/virtualbox.box
default: Progress: 3% (Rate: 489k/s, Estimated time remaining: 0:11:11)
At the end of the download process Vagrant will start the VM.
To access the Virtual machine, run
vagrant ssh
Note
You need an SSH client for the previous command to work. Most Linux distributions come with an SSH client installed. If you are using Windows as the host operating system, install MinGW, Cygwin or Git to obtain a command line SSH client. More information is available here
You will be connected to the guest Virtual Machine over SSH as user vagrant.
Running Ansible scripts¶
Ansible is a free software platform used for configuring and managing computers. It is written in Python and allows users to manage nodes (computers) over SSH.
Configuration files are written in YAML, a simple, human-readable data serialization format.
Installing Ansible¶
Before you install Ansible make sure you have Python 2.6 or Python 2.7 on the controlling machine, you will also need an SSH client. Most Linux distributions come with an SSH client preinstalled.
If you encounter any problems during the installation, refer to the official documentation
Windows¶
Windows is not supported as a controlling machine.
Running Ansible¶
To test your ansible installation, run the following command
Note
you need a running SSH server on your machine for this to work:
ansible localhost -m ping
You should get the following output:
localhost | success >> {
    "changed": false,
    "ping": "pong"
}
Ansible Hosts file¶
Ansible keeps information about the managed nodes in the inventory or hosts file. Edit or create the hosts file:
vim /etc/ansible/hosts
This file contains a list of nodes for Ansible to manage. Nodes can be referred to either by IP or by hostname. The syntax is the following:
192.168.1.50
aserver.example.org
bserver.example.org
You can also arrange hosts in groups:
mail.example.com
[webservers]
foo.example.com
bar.example.com
[dbservers]
one.example.com
two.example.com
three.example.com
Public Key access¶
To avoid having to type the user’s password to connect to the nodes over and over, using SSH keys is recommended.
To set up public key SSH access to the nodes, first create a key pair:
ssh-keygen
And follow the instructions on the screen. A new key pair will be generated and placed inside the .ssh folder in your user’s home directory.
All you need to do now is copy the public key (id_rsa.pub) into the authorized_keys file on the node you want to manage, inside that user’s home directory. For example, if you want to be able to connect to training.geonode1.com as user geo, edit the /home/geo/.ssh/authorized_keys file on the remote machine and add the content of your public key to it.
For more information on how to set up SSH keys in Ubuntu refer to this document
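For scripted setups the key pair can also be generated non-interactively; a sketch (the /tmp/demo_key path is just an example, and ssh-copy-id can replace the manual append when it is available):

```shell
# remove any leftover demo key, then generate an RSA key pair
# with no passphrase at an example location
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t rsa -N "" -f /tmp/demo_key

# the public half is what goes into the remote authorized_keys file;
# on the managed node it would be appended to ~/.ssh/authorized_keys
cat /tmp/demo_key.pub
```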
Connect to managed nodes¶
Now that SSH access to the managed nodes is in place for all the nodes inside the Ansible inventory (hosts file), we can run our first command:
ansible all -m ping -u geo
Note
change geo with the username to use for SSH login
The output will be similar to this::
ansible all -m ping -u geo
84.33.2.70 | success >> {
    "changed": false,
    "ping": "pong"
}
We asked Ansible to connect to all the machines in our Inventory as user geo and run the ping module (modules are Ansible’s units of work; more on that later). As you can see from the output, Ansible successfully connected to the remote machine and executed the ping module
Ad hoc commands¶
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
Later you are going to write so called Playbooks with the commands to run on the controlled node but for learning purposes ad-hoc commands can be used to do quick things.
One example of an ad-hoc command is the ping command we just ran. We typed it in on the command line and ran it interactively.
Another example::
ansible all -m shell -a "free" -u geo
84.33.2.70 | success | rc=0 >>
             total       used       free     shared    buffers     cached
Mem:       4049236    3915596     133640          0     650560    2487416
-/+ buffers/cache:     777620    3271616
Swap:      4194300     730268    3464032
In this example we ran the free command on the remote hosts to get memory usage stats. Note that we used the shell module (-m flag) with the command as the argument (-a flag)
File Transfer¶
Another use case for the Ansible command is transferring files over SCP:
ansible 84.33.2.70 -m copy -a "src=/home/geo/test dest=~/" -u geo
84.33.2.70 | success >> {
"changed": true,
"dest": "/home/geo/test",
"gid": 1000,
"group": "geo",
"md5sum": "d41d8cd98f00b204e9800998ecf8427e",
"mode": "0664",
"owner": "geo",
"size": 0,
"src": "/home/geo/.ansible/tmp/ansible-tmp-1444051174.15-189094870931130/source",
"state": "file",
"uid": 1000
}
We used the ansible command to transfer the local file /home/geo/test to the user’s home directory (~/) on the remote node
Managing Packages¶
Another use case is installing or upgrading packages on the remote nodes. You can use the apt module to achieve this on Debian based systems, or the yum module on Red Hat based systems:
ansible 84.33.2.70 -m apt -a "name=apache2 state=present"
The previous command will install the Apache web server on the remote system (if not already present).
You can use the same module to make sure a package is at the latest version:
ansible 84.33.2.70 -m apt -a "name=apache2 state=latest"
Managing Services¶
Use the service module to ensure a given service is started on all webservers::
ansible webservers -m service -a "name=httpd state=started"
(where webservers is a group defined in the Ansible Inventory)
Restart the service::
ansible webservers -m service -a "name=httpd state=restarted"
Or stop it:
ansible webservers -m service -a "name=httpd state=stopped"
For more information on ad-hoc commands refer to the official documentation
These were just a few of the modules available for Ansible. See the complete list on the Ansible web site
Ansible Playbooks¶
Playbooks are Ansible’s configuration, deployment, and orchestration language.
Playbooks are a completely different way to use Ansible than ad-hoc task execution mode, and are particularly powerful.
Playbooks can declare configurations, but they can also orchestrate steps of any manual ordered process.
While you might run the main /usr/bin/ansible program for ad-hoc tasks, playbooks are more likely to be kept in source control and used to push out your configuration or assure the configurations of your remote systems are in spec.
Playbook language example¶
Playbooks are expressed in YAML format
Here is an example of a Playbook::
---
- hosts: webservers
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running (and enable it at boot)
    service: name=httpd state=started enabled=yes
  handlers:
    - name: restart apache
      service: name=httpd state=restarted
Every Playbook begins with three dashes (---) at the very top of the file to indicate that this is a YAML file.
This example Playbook contains only one Play. The Play is composed of three parts:
- hosts
- tasks
- handlers
The hosts part specifies which hosts in the Inventory this Playbook applies to, and how to connect to them
The tasks part describes the desired state or actions to perform on the hosts
The handlers part describes the handlers for this playbook (more on handlers later)
In the example above there are three tasks. Each task has a name, a module and zero or more arguments for the module.
The first task specifies that we want apache at the latest version installed on the system. This is accomplished by the yum module
The second task specifies a configuration file for Apache using a template. Template files are written in the Jinja2 template language
The third task makes sure the Apache web server is running, using the service module.
When you run a Playbook using the ansible-playbook command, Ansible will connect to the hosts specified in the hosts section and run the tasks one by one, in order.
One or more tasks may have a notify section (just like the second task in our example). The notify actions are triggered at the end of each block of tasks in a playbook, and will only be triggered once even if notified by multiple different tasks. When triggered, the corresponding handler will be executed. In the example above the handler will restart Apache because we changed a config file
Run a Playbook¶
Now that we have created a sample Playbook, save it on the file system and execute it::
ansible-playbook test.yml -u geo
PLAY [84.33.2.70] *************************************************************
GATHERING FACTS ***************************************************************
ok: [84.33.2.70]
TASK: [test] ******************************************************************
ok: [84.33.2.70]
PLAY RECAP ********************************************************************
84.33.2.70 : ok=2 changed=0 unreachable=0 failed=0
This concludes our brief tutorial on Ansible. For a more thorough introduction refer to the official documentation
Also, take a look at the Ansible examples repository for a set of Playbooks showing common techniques.
GeoNode (vlatest) installation on Ubuntu 14.04¶
This part of the documentation describes the complete setup process for GeoNode on an Ubuntu 14.04 machine. The end of this section also describes how to automate the installation and configuration of GeoNode using Ansible.
Install GeoNode Application¶
In this section you are going to install all the basic packages and tools needed for a complete GeoNode installation.
Login¶
When you first start the Virtual Machine at the end of the boot process you will be prompted for the user password to login. Enter geo as user password and press Enter.

You are now logged in as user ‘geo’. On the left side of the screen there is a panel with shortcuts to common applications; launch the terminal emulator.

Packages Installation¶
First we are going to install all the software packages needed for the GeoNode setup, among others Tomcat 7, PostgreSQL, PostGIS, the Apache HTTP server and Git. Run the following command to install all the packages:
sudo apt-get install \
python-virtualenv \
build-essential \
openssh-server \
apache2 \
gcc \
gdal-bin \
gettext \
git-core \
libapache2-mod-wsgi \
libgeos-dev \
libjpeg-dev \
libpng-dev \
libpq-dev \
libproj-dev \
libxml2-dev \
libxslt-dev \
openjdk-7-jre \
patch \
postgresql \
postgis \
postgresql-9.3-postgis-scripts \
postgresql-contrib \
python \
python-dev \
python-gdal \
python-pycurl \
python-imaging \
python-pastescript \
python-psycopg2 \
python-support \
python-urlgrabber \
python-virtualenv \
tomcat7 \
unzip \
zip

You will be prompted for the geo user’s password (geo) and for confirmation twice

The installation process is going to take a few minutes.
At this point we have all the packages we need on the system.
GeoNode Setup¶
Let’s download GeoNode from the main GeoNode repository on GitHub:
Note
For the purpose of this training the GeoNode repository has been downloaded and placed in the geo user’s home folder (/home/geo/geonode). Skip the following command group.
git clone https://github.com/GeoNode/geonode.git
sudo useradd -m geonode
sudo mv ~/geonode /home/geonode/
Move into the project folder
cd /home/geonode/geonode
And install GeoNode
Note
For the purpose of this training the GeoNode Python packages have been downloaded and installed. Skip the following command:
sudo pip install -e .
The following command will download a GeoServer web archive that we are going to use in the GeoServer setup:
Note
Again, for the purpose of this training the GeoServer web archive has been downloaded. Skip the following command.
sudo paver setup
In the next section we are going to setup PostgreSQL Databases for GeoNode
Create GeoNode DB¶
In this section we are going to setup users and databases for GeoNode in PostgreSQL.
Users and Permissions¶
First create the geonode user. GeoNode is going to use this user to access the database:
sudo -u postgres createuser -P geonode
You will be prompted to set a password for the user. Enter geonode as the password
Create geonode database with owner geonode:
sudo -u postgres createdb -O geonode geonode
And database geonode_data with owner geonode:
sudo -u postgres createdb -O geonode geonode_data
Switch to user postgres and create PostGIS extension:
sudo su postgres
psql -d geonode_data -c 'CREATE EXTENSION postgis;'
Then adjust permissions:
psql -d geonode_data -c 'GRANT ALL ON geometry_columns TO PUBLIC;'
psql -d geonode_data -c 'GRANT ALL ON spatial_ref_sys TO PUBLIC;'
And switch back to the ‘geo’ user:
exit
Now we are going to change user access policy for local connections in file pg_hba.conf:
sudo gedit /etc/postgresql/9.3/main/pg_hba.conf
Scroll down to the bottom of the document. We only need to edit one line. Change:
# "local" is for Unix domain socket connections only
local all all peer
Into::
# "local" is for Unix domain socket connections only
local all all trust
Note
If your PostgreSQL database resides on a separate machine, you have to allow remote access to the databases in the pg_hba.conf for the geonode user and tell PostgreSQL to accept non local connections in your postgresql.conf file
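For the remote-database case described in the note above, the relevant entries would look roughly like this (the subnet and authentication method are placeholders for your actual network and policy):

```
# pg_hba.conf: allow the geonode user to connect from the GeoNode machine's subnet
host    all    geonode    192.168.1.0/24    md5

# postgresql.conf: accept connections on all interfaces, not just localhost
listen_addresses = '*'
```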
Then restart PostgreSQL to make the change effective::
sudo service postgresql restart
PostgreSQL is now ready. To test the configuration try to connect to the geonode database as geonode::
psql -U geonode geonode
Setup & Configure HTTPD¶
In this section we are going to setup Apache HTTP to serve GeoNode.
Apache Configuration¶
Navigate to Apache configurations folder::
cd /etc/apache2/sites-available
And create a new configuration file for GeoNode::
sudo gedit geonode.conf
Place the following content inside the file::
WSGIDaemonProcess geonode python-path=/home/geonode/geonode:/home/geonode/.venvs/geonode/lib/python2.7/site-packages user=www-data threads=15 processes=2
<VirtualHost *:80>
ServerName http://localhost
ServerAdmin webmaster@localhost
DocumentRoot /home/geonode/geonode/geonode
ErrorLog /var/log/apache2/error.log
LogLevel warn
CustomLog /var/log/apache2/access.log combined
WSGIProcessGroup geonode
WSGIPassAuthorization On
WSGIScriptAlias / /home/geonode/geonode/geonode/wsgi.py
Alias /static/ /home/geonode/geonode/geonode/static_root/
Alias /uploaded/ /home/geonode/geonode/geonode/uploaded/
<Directory "/home/geonode/geonode/geonode/">
<Files wsgi.py>
Order deny,allow
Allow from all
Require all granted
</Files>
Order allow,deny
Options Indexes FollowSymLinks
Allow from all
IndexOptions FancyIndexing
</Directory>
<Directory "/home/geonode/geonode/geonode/static_root/">
Order allow,deny
Options Indexes FollowSymLinks
Allow from all
Require all granted
IndexOptions FancyIndexing
</Directory>
<Directory "/home/geonode/geonode/geonode/uploaded/thumbs/">
Order allow,deny
Options Indexes FollowSymLinks
Allow from all
Require all granted
IndexOptions FancyIndexing
</Directory>
<Directory "/home/geonode/geonode/geonode/uploaded/layers/">
Order allow,deny
Options Indexes FollowSymLinks
Allow from all
Require all granted
IndexOptions FancyIndexing
</Directory>
<Proxy *>
Order allow,deny
Allow from all
</Proxy>
ProxyPreserveHost On
ProxyPass /geoserver http://127.0.0.1:8080/geoserver
ProxyPassReverse /geoserver http://127.0.0.1:8080/geoserver
</VirtualHost>
This sets up a VirtualHost in Apache HTTP server for GeoNode and a reverse proxy for GeoServer.
Note
In the case that GeoServer is running on a separate machine change the ProxyPass and ProxyPassReverse accordingly
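For example, if GeoServer ran on a separate host, the two proxy directives would point at that host instead (the hostname below is a placeholder):

```
ProxyPass /geoserver http://geoserver.example.org:8080/geoserver
ProxyPassReverse /geoserver http://geoserver.example.org:8080/geoserver
```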
Now load apache poxy module:
sudo a2enmod proxy_http
And enable geonode configuration file:
sudo a2ensite geonode
Download the GeoNode static data to be served by Apache. You will be prompted for confirmation:
cd /home/geonode/geonode/
sudo -u geonode python manage.py collectstatic
Add thumbs and layers folders:
sudo mkdir -p /home/geonode/geonode/geonode/uploaded/thumbs
sudo mkdir -p /home/geonode/geonode/geonode/uploaded/layers
Change permissions on GeoNode files and folders to allow Apache to read and edit them::
sudo chown -R geonode /home/geonode/geonode/
sudo chown geonode:www-data /home/geonode/geonode/geonode/static/
sudo chown geonode:www-data /home/geonode/geonode/geonode/uploaded/
chmod -Rf 777 /home/geonode/geonode/geonode/uploaded/thumbs
chmod -Rf 777 /home/geonode/geonode/geonode/uploaded/layers
sudo chown www-data:www-data /home/geonode/geonode/geonode/static_root/
Finally restart Apache to load the new configuration:
sudo service apache2 restart
Install GeoServer Application¶
In this section we are going to set up GeoServer for GeoNode. GeoServer will run inside the Tomcat servlet container.
Setup GeoServer¶
You’ve already installed Tomcat 7 on the system in the first section of the training. Before you deploy GeoServer, stop the running Tomcat instance::
sudo service tomcat7 stop
Now copy the downloaded GeoServer archive inside Tomcat’s webapps folder:
sudo cp /home/geonode/geonode/downloaded/geoserver.war /var/lib/tomcat7/webapps/
Test GeoServer¶
Now start Tomcat to deploy GeoServer:
sudo service tomcat7 start
Tomcat will extract GeoServer web archive and start GeoServer. This may take some time
Open a web browser (in this example Firefox) and navigate to http://localhost:8080/geoserver

In a few seconds GeoServer web interface will show up:

GeoNode authentication integration¶
All we need to do now is integrate GeoNode authentication so that the GeoNode administrator will be able to access and administer GeoServer as well.
Stop GeoServer:
sudo service tomcat7 stop
And navigate to /var/lib/tomcat7/webapps/geoserver/WEB-INF/:
cd /var/lib/tomcat7/webapps/geoserver/WEB-INF/
Edit web.xml with a text editor:
sudo gedit web.xml
And add the following::
<context-param>
<param-name>GEONODE_BASE_URL</param-name>
<param-value>http://localhost/</param-value>
</context-param>
Note
If GeoServer is installed on a machine separate from the one running GeoNode, change the GEONODE_BASE_URL value accordingly
The resulting file should look like this::
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
<display-name>GeoServer</display-name>
<context-param>
<param-name>GEONODE_BASE_URL</param-name>
<param-value>http://localhost/</param-value>
</context-param>
<context-param>
<param-name>serviceStrategy</param-name>
<!-- Meaning of the different values :
PARTIAL-BUFFER2
- Partially buffers the first xKb to disk. Once that has buffered, the the
result is streamed to the user. This will allow for most errors to be caught
early.
...
Note
If GeoServer is installed on a machine separate from the one running GeoNode there is one more step to follow. Edit the config.xml file as follows::
cd /var/lib/tomcat7/webapps/geoserver/data/security/auth/geonodeAuthProvider
sudo gedit config.xml
Set the baseUrl to the GeoNode IP or hostname:
<org.geonode.security.GeoNodeAuthProviderConfig>
<id>-54fbcd7b:1402c24f6bc:-7fe9</id>
<name>geonodeAuthProvider</name>
<className>org.geonode.security.GeoNodeAuthenticationProvider</className>
<baseUrl>http://localhost/</baseUrl>
</org.geonode.security.GeoNodeAuthProviderConfig>
Restart GeoServer to make the changes effective:
sudo service tomcat7 start
Finish installation¶
In the previous sections you’ve set up all the applications we need to run GeoNode.
In this section we are going to glue all the pieces together and finalize GeoNode setup.
GeoNode Configuration¶
Now that all the applications have been configured, we are going to instruct GeoNode on how to connect to PostgreSQL and GeoServer, and on who is allowed to connect to it.
First navigate to geonode configuration folder:
cd /home/geonode/geonode/geonode/
Copy the sample file local_settings.py.sample to local_settings.py:
sudo cp local_settings.py.sample local_settings.py
Then edit the configuration file:
sudo gedit local_settings.py
Add the ALLOWED_HOSTS and PROXY_ALLOWED_HOSTS variables at the top with the following values::
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', '::1']
PROXY_ALLOWED_HOSTS = ("127.0.0.1", 'localhost', '::1')
Add the POSTGIS_VERSION variable matching your PostGIS version::
POSTGIS_VERSION = (2, 1, 2)
This will instruct GeoNode to accept connections from your local machine.
Change the value of the SITEURL:
SITEURL = "http://localhost/"
Now configure database access: uncomment the ENGINE: django.contrib.gis.db.backends.postgis line and comment out the one with the empty ENGINE value. Also set the NAME variable to geonode_data:
DATABASES = {
'default': {
...
},
'datastore' : {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
#'ENGINE': '', # Empty ENGINE name disables,
'NAME': 'geonode_data',
...
}
Then configure GeoServer location: Change the value of the LOCATION and PUBLIC_LOCATION variables as follows::
OGC_SERVER = {
'default' : {
...
'LOCATION' : 'http://localhost/geoserver/',
'PUBLIC_LOCATION' : 'http://localhost/geoserver/',
...
}
The resulting configuration file should look like this::
import os
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
SITEURL = "http://localhost/"
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', '::1']
PROXY_ALLOWED_HOSTS = ("127.0.0.1", 'localhost', '::1')
POSTGIS_VERSION = (2, 1, 2)
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'geonode',
'USER': 'geonode',
'PASSWORD': 'geonode',
},
# vector datastore for uploads
'datastore' : {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
#'ENGINE': '', # Empty ENGINE name disables
'NAME': 'geonode_data',
'USER' : 'geonode',
'PASSWORD' : 'geonode',
'HOST' : 'localhost',
'PORT' : '5432',
}
}
# OGC (WMS/WFS/WCS) Server Settings
OGC_SERVER = {
'default' : {
'BACKEND' : 'geonode.geoserver',
'LOCATION' : 'http://localhost/geoserver/',
'PUBLIC_LOCATION' : 'http://localhost/geoserver/',
'USER' : 'admin',
'PASSWORD' : 'geoserver',
'MAPFISH_PRINT_ENABLED' : True,
'PRINT_NG_ENABLED' : True,
'GEONODE_SECURITY_ENABLED' : True,
'GEOGIG_ENABLED' : False,
'WMST_ENABLED' : False,
'BACKEND_WRITE_ENABLED': True,
'WPS_ENABLED' : False,
'LOG_FILE': '%s/geoserver/data/logs/geoserver.log' % os.path.abspath(os.path.join(PROJECT_ROOT, os.pardir)),
# Set to name of database in DATABASES dictionary to enable
'DATASTORE': '', #'datastore',
}
}
CATALOGUE = {
'default': {
# The underlying CSW implementation
# default is pycsw in local mode (tied directly to GeoNode Django DB)
'ENGINE': 'geonode.catalogue.backends.pycsw_local',
# pycsw in non-local mode
# 'ENGINE': 'geonode.catalogue.backends.pycsw_http',
# GeoNetwork opensource
# 'ENGINE': 'geonode.catalogue.backends.geonetwork',
# deegree and others
# 'ENGINE': 'geonode.catalogue.backends.generic',
# The FULLY QUALIFIED base url to the CSW instance for this GeoNode
'URL': '%scatalogue/csw' % SITEURL,
# 'URL': 'http://localhost:8080/geonetwork/srv/en/csw',
# 'URL': 'http://localhost:8080/deegree-csw-demo-3.0.4/services',
# login credentials (for GeoNetwork)
'USER': 'admin',
'PASSWORD': 'admin',
}
}
# Default preview library
#LAYER_PREVIEW_LIBRARY = 'geoext'
Create GeoNode Superuser¶
Now create the admin user for GeoNode by running the following::
python manage.py createsuperuser
You will be prompted for the username, email address and password for the user
Initialize the Database¶
We’ve already set up the GeoNode databases and user permissions in PostgreSQL, and instructed GeoNode on how to access the database. Now let’s get the database ready by running the following::
cd /home/geonode/geonode
python manage.py syncdb --noinput
If you get an error message, it is likely that database access is not set up properly. Check your pg_hba.conf file and the GeoNode local_settings.py file. Remember to restart PostgreSQL to make any changes to pg_hba.conf effective.
Test that you are able to connect to the database as follows::
psql -U geonode geonode
and:
psql -U geonode geonode_data
Test the installation¶
We are ready to restart GeoNode (Apache) and test the installation. Restart Apache::
sudo service apache2 restart
Open the browser and navigate to http://localhost/
The GeoNode user interface will show up. Log in with the admin username and password you just set.


Now open the main menu and click on GeoServer

You will be redirected to GeoServer user interface. You will automatically be logged in as administrator in GeoServer.

Setup Using Ansible¶
Using Ansible we can automate the installation and configuration of GeoNode on local or remote machines. If you have never used Ansible before, take a look at the Running Ansible scripts section of the documentation or the official Ansible documentation
GeoNode setup with Ansible¶
The Virtual Machines provided for the training come with Ansible preinstalled. In the geo user’s home folder (/home/geo/) you will find a simple Ansible playbook written for demonstration, along with some resource files used by the playbook itself.
To start the installation with Ansible open a terminal and navigate to the ansible folder inside geo’s home folder:
cd /home/geo/ansible
Then run the playbook using the ansible-playbook command:
ansible-playbook playbook.yml
Ansible will run through the instructions in the playbook and setup GeoNode on your local machine.
Ansible will print information about each step executed. The whole setup is going to take a few minutes.
At the end of the execution of the playbook, all you need to do is set the GeoNode administrator’s password:
cd /home/geonode/geonode
python manage.py changepassword sysadmin
GeoNode (vlatest) installation on CentOS 7¶
This part of the documentation describes the complete setup process for GeoNode on a CentOS 7 machine.
Installing the Operating System¶
CentOS Setup¶
We are going to install a minimal CentOS 7 distribution. You can get a copy of the .iso image used for the installation here.
Boot up the installation DVD and start the CentOS 7 Installation wizard.
Under Select Date & Time, set appropriate date and time settings
Under Keyboard, choose the keyboard layout
Under Installation Destination, select the hard disk where CentOS will be installed.
Create a custom partitioning scheme as follows:
Partition Label   Partition Type   Size     Mount Point
boot              ext3             700 MB   /boot
root              ext4             35 GB    /
swap              swap             4 GB

Under Networking, configure your network interface according to your infrastructure: you can either set it to DHCP to automatically get all the settings from a local DHCP server, or configure it by hand.
Enable the network interface, then go back to Select Date & Time and enable NTP synchronization periodically get date and time settings from CentOS servers
Click on Begin Installation
Now set the password for the root user. Also click on User Creation to create the geo user.
Wait for the installation process to finish, then reboot your machine
Network configuration¶
The network configuration should already be in place, since it was configured during the CentOS setup stage.
You may want to review the configuration files
/etc/sysconfig/network-scripts/ifcfg-DEVICE
You may also want to review the file /etc/resolv.conf
to check the nameservers.
Check that the connection is up by pinging an external server:
ping 8.8.8.8
Check that DNS is properly configured by pinging a host by its name:
ping google.com
Attention
Please note that in CentOS only ssh incoming connections are allowed; all other incoming connections are disabled by default.
In the paragraph related to the httpd service you can find details about how to enable incoming traffic.
Note that after configuring the network, you may continue installing the system setup using a ssh connection.
User access configuration¶
Login as the root user and give the geo user administrative privileges by adding it to the wheel group:
usermod -aG wheel geo
SSH access¶
On CentOS 7 the firewall is enabled by default. To allow SSH clients to connect to the machine allow incoming connections on port 22:
firewall-cmd --zone=public --add-port=22/tcp --permanent
firewall-cmd --zone=public --add-service=ssh --permanent
firewall-cmd --reload
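To verify that the rules were stored, you can inspect the active configuration of the public zone (firewall-cmd is part of the default firewalld installation):

```shell
# Show services and ports currently allowed in the public zone;
# "ssh" should appear under services and 22/tcp under ports
firewall-cmd --zone=public --list-all
```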
Warning
Before you disable root login, make sure you are able to login via SSH with the geo user account and that you have the privileges to run sudo su to switch to the root user account.
Edit file /etc/ssh/sshd_config to disable root login via SSH:
PermitRootLogin no
Public key authentication is generally considered a safer way to authenticate users for SSH access. Let’s set it up and disable password based authentication.
First generate a public/private key pair using ssh-keygen:
ssh-keygen
Follow the procedure; you will end up with your newly generated key under ~/.ssh
Now copy your public key (by default it is called id_rsa.pub) over to the CentOS machine in /home/geo/.ssh/authorized_keys. There are several ways to do it; we are going to use the ssh-copy-id tool:
ssh-copy-id -i ~/.ssh/id_rsa.pub geo@<server-ip-address>
You should now be able to login via SSH as geo without being asked for the password:
ssh geo@<server-ip-address>
You can now disable password based login over SSH
Warning
Before disabling password authentication, make sure you’ve installed your public key on the server and you are able to login without a password.
Edit /etc/ssh/sshd_config as follows:
...
RSAAuthentication yes
...
PubkeyAuthentication yes
...
PasswordAuthentication no
...
UsePAM no
...
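After editing sshd_config, restart the SSH daemon and check the effective settings. sshd -T prints the running server configuration, so you can confirm the changes took hold before closing your session:

```shell
# Apply the new configuration
sudo systemctl restart sshd

# Verify the effective values (keys are printed in lowercase)
sudo sshd -T | grep -Ei 'permitrootlogin|passwordauthentication|pubkeyauthentication'
```

Keep your current SSH session open while testing a fresh login from a second terminal, so you can revert the file if something is wrong.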
Installing ntp¶
Install the program for ntp server synchronization:
yum install ntp
Edit /etc/ntp.conf and add the following line before the first server directive:
server tempo.ien.it # Galileo Ferraris
Replace tempo.ien.it with your nearest ntp server.
Sync with the server by issuing:
systemctl start ntpd
Set the time synchronization as an autostarting daemon:
systemctl enable ntpd
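You can confirm that the daemon is actually synchronizing by querying its peer list:

```shell
# Show the servers ntpd is polling; an asterisk marks the
# currently selected time source
ntpq -p
```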
Install GeoNode application¶
Install required libs¶
Make sure all the needed libraries are installed:
sudo yum install -y git gdal gdal-python geos python-pip python-imaging \
python-virtualenv python-devel gcc-c++ python-psycopg2 libxml2 \
libxml2-devel libxml2-python libxslt libxslt-devel libxslt-python
Create geonode user¶
Create the user:
adduser -m geonode
Install GeoNode¶
Install the GeoNode sources from the official repository:
git clone https://github.com/GeoNode/geonode.git
Move the sources into the geonode user’s home folder:
mv geonode /home/geonode
Navigate to the sources folder and install the required packages:
cd /home/geonode/geonode
pip install -e .
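A quick way to confirm that the package was installed into the active Python environment is to import it (the geonode module name comes from the sources we just installed):

```shell
# Should print the module path without raising ImportError
python -c "import geonode; print(geonode.__file__)"
```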
Edit settings¶
GeoNode Configuration¶
Now that all applications have been configured we are going to instruct GeoNode on how to connect to PostgreSQL and GeoServer. Also we are going to instruct GeoNode on who is allowed to connect to it.
First navigate to geonode configuration folder:
cd /home/geonode/geonode/geonode/
Copy the sample configuration file local_settings.py.sample to local_settings.py:
cp local_settings.py.sample local_settings.py
Then edit the configuration file:
vim local_settings.py
Add the ALLOWED_HOSTS and PROXY_ALLOWED_HOSTS variables at the top with the following values:
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', '::1']
PROXY_ALLOWED_HOSTS = ("127.0.0.1", 'localhost', '::1')
Add the POSTGIS_VERSION variable matching your PostGIS version:
POSTGIS_VERSION = (2, 1, 8)
This will instruct GeoNode to accept connections from your local machine.
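ALLOWED_HOSTS acts as a whitelist of Host header values Django will serve. As a rough illustration (simplified: Django's real validation also strips the port and supports wildcard entries such as '.example.com'), the logic is a membership test:

```python
# Simplified sketch of how Django applies ALLOWED_HOSTS.
# The real validation also strips ports and handles wildcard
# patterns; this version covers exact matches only.
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', '::1']

def host_is_allowed(host, allowed=ALLOWED_HOSTS):
    """Return True if the request's Host value is whitelisted."""
    return host in allowed

print(host_is_allowed('localhost'))    # allowed: listed explicitly
print(host_is_allowed('example.com'))  # rejected: Django would answer HTTP 400
```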
Change the value of the SITEURL:
SITEURL = "http://localhost/"
Now configure database access: uncomment the 'ENGINE': 'django.contrib.gis.db.backends.postgis' line and comment out the one with the empty ENGINE variable. Also set the NAME variable to geonode_data:
DATABASES = {
'default': {
...
},
'datastore' : {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
#'ENGINE': '', # Empty ENGINE name disables,
'NAME': 'geonode_data',
...
}
Then configure the GeoServer location: change the value of the LOCATION and PUBLIC_LOCATION variables as follows:
OGC_SERVER = {
'default' : {
...
'LOCATION' : 'http://localhost/geoserver/',
'PUBLIC_LOCATION' : 'http://localhost/geoserver/',
...
}
The resulting configuration file should look like this:
import os
PROJECT_ROOT = os.path.abspath(os.path.dirname(__file__))
SITEURL = "http://localhost/"
ALLOWED_HOSTS = ['127.0.0.1', 'localhost', '::1']
PROXY_ALLOWED_HOSTS = ("127.0.0.1", 'localhost', '::1')
POSTGIS_VERSION = (2, 1, 8)
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'geonode',
'USER': 'geonode',
'PASSWORD': 'geonode',
},
# vector datastore for uploads
'datastore' : {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
#'ENGINE': '', # Empty ENGINE name disables
'NAME': 'geonode_data',
'USER' : 'geonode',
'PASSWORD' : 'geonode',
'HOST' : 'localhost',
'PORT' : '5432',
}
}
# OGC (WMS/WFS/WCS) Server Settings
OGC_SERVER = {
'default' : {
'BACKEND' : 'geonode.geoserver',
'LOCATION' : 'http://localhost/geoserver/',
'PUBLIC_LOCATION' : 'http://localhost/geoserver/',
'USER' : 'admin',
'PASSWORD' : 'geoserver',
'MAPFISH_PRINT_ENABLED' : True,
'PRINT_NG_ENABLED' : True,
'GEONODE_SECURITY_ENABLED' : True,
'GEOGIG_ENABLED' : False,
'WMST_ENABLED' : False,
'BACKEND_WRITE_ENABLED': True,
'WPS_ENABLED' : False,
'LOG_FILE': '%s/geoserver/data/logs/geoserver.log' % os.path.abspath(os.path.join(PROJECT_ROOT, os.pardir)),
# Set to name of database in DATABASES dictionary to enable
'DATASTORE': '', #'datastore',
}
}
CATALOGUE = {
'default': {
# The underlying CSW implementation
# default is pycsw in local mode (tied directly to GeoNode Django DB)
'ENGINE': 'geonode.catalogue.backends.pycsw_local',
# pycsw in non-local mode
# 'ENGINE': 'geonode.catalogue.backends.pycsw_http',
# GeoNetwork opensource
# 'ENGINE': 'geonode.catalogue.backends.geonetwork',
# deegree and others
# 'ENGINE': 'geonode.catalogue.backends.generic',
# The FULLY QUALIFIED base url to the CSW instance for this GeoNode
'URL': '%scatalogue/csw' % SITEURL,
# 'URL': 'http://localhost:8080/geonetwork/srv/en/csw',
# 'URL': 'http://localhost:8080/deegree-csw-demo-3.0.4/services',
# login credentials (for GeoNetwork)
'USER': 'admin',
'PASSWORD': 'admin',
}
}
# Default preview library
#LAYER_PREVIEW_LIBRARY = 'geoext'
Initialize GeoNode¶
As the geonode user, initialize the database by creating the schema tables and populating the static data:
cd /home/geonode/geonode/geonode/
python manage.py syncdb --noinput
Now create the admin user for GeoNode by running the following:
python manage.py createsuperuser
You will be prompted for the username, email address and password for the user.
Collect the GeoNode static files to be served by Apache. You will be prompted for confirmation:
python manage.py collectstatic
Create the uploaded folder:
mkdir /home/geonode/geonode/geonode/uploaded/
Change permissions on GeoNode files and folders to allow Apache to read and edit them:
chmod +x /home/geonode/
chown -R geonode /home/geonode/geonode/
chown apache:apache /home/geonode/geonode/geonode/static/
chown apache:apache /home/geonode/geonode/geonode/uploaded/
chown apache:apache /home/geonode/geonode/geonode/static_root/
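You can check that the web server account can actually write where it needs to (apache is the account httpd runs under on CentOS):

```shell
# Each command prints "writable" only if the apache user has write access
sudo -u apache test -w /home/geonode/geonode/geonode/static/ && echo writable
sudo -u apache test -w /home/geonode/geonode/geonode/uploaded/ && echo writable
```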
Tomcat Installation¶
Installing Java¶
We’ll need a JDK to run GeoServer.
You may already have the OpenJDK package (java-1.7.0-openjdk-devel.x86_64
) installed.
Check and see if Java is already installed:
# java -version
java version "1.7.0_51"
OpenJDK Runtime Environment (rhel-2.4.4.1.el6_5-x86_64 u51-b02)
OpenJDK 64-Bit Server VM (build 24.45-b08, mixed mode)
# javac -version
javac 1.7.0_51
Otherwise install it by running:
yum install java-1.7.0-openjdk-devel
Once done, the command java -version should return info about the installed version.
If the java version does not match the one just installed, run the following command:
alternatives --set java /usr/lib/jvm/java-1.8.0-openjdk-1.8.0.60-2.b27.el7_1.x86_64/jre/bin/java
Oracle JDK¶
Until recently, the Oracle JDK performed better than the OpenJDK, so it was the preferred choice. This is no longer true; nevertheless, the following paragraph provides instructions on how to install the Oracle JDK.
You can download the Oracle JDK RPM from this page:
Oracle does not expose a URL for automatically downloading the JDK, because an interactive license acceptance is required. You may start downloading the JDK RPM from a browser, and then either:
- stop the download from the browser and use on the server the dynamic download URL your browser has been assigned, or
- finish the download and transfer the JDK RPM to the server using scp.
Once you have the .rpm file, you can install it by:
rpm -ivh jdk-7u51-linux-x64.rpm
Once installed, you will see that the default java and javac commands are still the ones from OpenJDK.
In order to switch JDK version you have to set the proper system alternatives.
You may want to refer to this page. Issue the command:
alternatives --install /usr/bin/java java /usr/java/latest/bin/java 200000 \
--slave /usr/lib/jvm/jre jre /usr/java/latest/jre \
--slave /usr/lib/jvm-exports/jre jre_exports /usr/java/latest/jre/lib \
--slave /usr/bin/keytool keytool /usr/java/latest/jre/bin/keytool \
--slave /usr/bin/orbd orbd /usr/java/latest/jre/bin/orbd \
--slave /usr/bin/pack200 pack200 /usr/java/latest/jre/bin/pack200 \
--slave /usr/bin/rmid rmid /usr/java/latest/jre/bin/rmid \
--slave /usr/bin/rmiregistry rmiregistry /usr/java/latest/jre/bin/rmiregistry \
--slave /usr/bin/servertool servertool /usr/java/latest/jre/bin/servertool \
--slave /usr/bin/tnameserv tnameserv /usr/java/latest/jre/bin/tnameserv \
--slave /usr/bin/unpack200 unpack200 /usr/java/latest/jre/bin/unpack200 \
--slave /usr/share/man/man1/java.1 java.1 /usr/java/latest/man/man1/java.1 \
--slave /usr/share/man/man1/keytool.1 keytool.1 /usr/java/latest/man/man1/keytool.1 \
--slave /usr/share/man/man1/orbd.1 orbd.1 /usr/java/latest/man/man1/orbd.1 \
--slave /usr/share/man/man1/pack200.1 pack200.1 /usr/java/latest/man/man1/pack200.1 \
--slave /usr/share/man/man1/rmid.1.gz rmid.1 /usr/java/latest/man/man1/rmid.1 \
--slave /usr/share/man/man1/rmiregistry.1 rmiregistry.1 /usr/java/latest/man/man1/rmiregistry.1 \
--slave /usr/share/man/man1/servertool.1 servertool.1 /usr/java/latest/man/man1/servertool.1 \
--slave /usr/share/man/man1/tnameserv.1 tnameserv.1 /usr/java/latest/man/man1/tnameserv.1 \
--slave /usr/share/man/man1/unpack200.1 unpack200.1 /usr/java/latest/man/man1/unpack200.1
Then run
alternatives --config java
and select the number related to /usr/java/latest/bin/java.
Now the default java version should be the Oracle one. Verify the proper installation of the JDK:
# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)
# javac -version
javac 1.7.0_51
Installing Tomcat¶
Tomcat¶
Let’s install Tomcat first:
sudo yum install -y tomcat
Then prepare a clean instance called base to be used as a template for all Tomcat instances:
sudo mkdir /var/lib/tomcats/base
sudo cp -a /usr/share/tomcat/* /var/lib/tomcats/base/
Then create GeoServer’s instance directory structure:
sudo mkdir /var/lib/tomcats/geoserver
sudo cp -a /usr/share/tomcat/* /var/lib/tomcats/geoserver/
Instance manager script¶
Copy the existing management script:
sudo cp /usr/lib/systemd/system/tomcat.service \
/usr/lib/systemd/system/tomcat\@geoserver.service
Edit the EnvironmentFile variable in the service management file as follows:
sudo vim /usr/lib/systemd/system/tomcat\@geoserver.service
# Systemd unit file for default tomcat
#
# To create clones of this service:
# DO NOTHING, use tomcat@.service instead.
[Unit]
Description=Apache Tomcat Web Application Container
After=syslog.target network.target
[Service]
Type=simple
EnvironmentFile=/etc/tomcat/tomcat.conf
Environment="NAME="
EnvironmentFile=-/etc/sysconfig/tomcat@geoserver
ExecStart=/usr/libexec/tomcat/server start
ExecStop=/usr/libexec/tomcat/server stop
SuccessExitStatus=143
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target
Create the associated configuration file from template:
sudo cp /etc/sysconfig/tomcat /etc/sysconfig/tomcat\@geoserver
Edit the configuration file and customize the CATALINA_HOME and CATALINA_BASE variables:
...
CATALINA_BASE="/var/lib/tomcats/geoserver"
CATALINA_HOME="/usr/share/tomcat"
...
Now copy the GeoServer web archive into the webapps folder. Tomcat will extract the war file and run GeoServer:
sudo cp geoserver.war /var/lib/tomcats/geoserver/webapps/
And fix the permissions on the files:
sudo chown -R tomcat:tomcat /var/lib/tomcats/
Finally start GeoServer:
systemctl start tomcat@geoserver
And enable it to automatically start at boot time:
systemctl enable tomcat@geoserver
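Once the service is up, Tomcat should have deployed the war; you can probe the GeoServer web interface with curl. This assumes the instance is listening on Tomcat's default HTTP port 8080 — adjust if your server.xml says otherwise:

```shell
# A 200 or a redirect status code here means GeoServer answered
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/geoserver/web/
```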
Apache HTTP Server Installation¶
Install Apache:
sudo yum install -y httpd
And additional modules:
sudo yum install -y mod_ssl mod_proxy_html mod_wsgi
Firewall configuration¶
Allow requests on port 80 through the firewall:
firewall-cmd --zone=public --add-service=http --permanent
firewall-cmd --reload
Security issues¶
There are a couple of security issues to fix when dealing with GeoNode.
GeoNode will run inside httpd through WSGI. This means that httpd will try to open outbound connections toward the DB. This is usually blocked by default by strict security policies, so we need to relax them:
setsebool -P httpd_can_network_connect_db 1
The other issue is about SELinux itself: it is not WSGI friendly, so we’ll have to disable it.
Edit the file /etc/sysconfig/selinux and change the line:
SELINUX=enforcing
into
SELINUX=permissive
and reboot the machine.
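If you prefer not to reboot immediately, you can also switch SELinux to permissive mode on the running system (the config file edit above makes the change survive the next boot):

```shell
# Check the current mode
getenforce

# Switch to permissive until the next reboot
sudo setenforce 0
```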
httpd configuration¶
As root, create the file /etc/httpd/conf.d/geonode.conf and insert this content into it.
Add thumbs and layers folders:
sudo mkdir -p /home/geonode/geonode/geonode/uploaded/thumbs
sudo mkdir -p /home/geonode/geonode/geonode/uploaded/layers
Set appropriate permissions on the folders:
sudo chown -R geonode /home/geonode/geonode/
sudo chown geonode:apache /home/geonode/geonode/geonode/static/
sudo chown geonode:apache /home/geonode/geonode/geonode/uploaded/
sudo chmod -Rf 777 /home/geonode/geonode/geonode/uploaded/thumbs
sudo chmod -Rf 777 /home/geonode/geonode/geonode/uploaded/layers
sudo chown apache:apache /home/geonode/geonode/geonode/static_root/
Then restart httpd to make it reload the new configurations:
systemctl restart httpd
To automatically start Apache at boot, run:
systemctl enable httpd
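With httpd running, you can check that the site answers locally before moving on:

```shell
# Print only the response headers from the front-end
curl -I http://localhost/
```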
Installing PostgreSQL and PostGIS¶
Install PostgreSQL¶
Install the package for configuring the PGDG repository:
sudo yum install http://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-1.noarch.rpm
The EPEL repository will provide the GDAL packages:
sudo yum install http://mirror.sfo12.us.leaseweb.net/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
Install PostgreSQL, PostGIS and related libs:
sudo yum install -y postgis2_94 postgresql94 postgresql94-server postgresql94-libs postgresql94-contrib \
postgresql94-devel gdal gdal-python geos python-pip python-imaging python-virtualenv python-devel gcc-c++ \
python-psycopg2 libxml2 libxml2-devel libxml2-python libxslt libxslt-devel libxslt-python
Init the DB:
/usr/pgsql-9.4/bin/postgresql94-setup initdb
Enable start on boot:
systemctl enable postgresql-9.4
Start postgres service by hand:
systemctl start postgresql-9.4
To restart or reload the instance, you can use the following commands:
systemctl restart postgresql-9.4
systemctl reload postgresql-9.4
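You can verify that the cluster is up and accepting local connections:

```shell
# Service state as seen by systemd
systemctl is-active postgresql-9.4

# Ask the server for its version over a local connection
sudo -u postgres psql -c 'SELECT version();'
```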
Setting PostgreSQL access¶
Now we are going to change the user access policy for local connections in the file pg_hba.conf:
sudo vim /var/lib/pgsql/9.4/data/pg_hba.conf
Scroll down to the bottom of the document. We only need to edit one line. Change:
# "local" is for Unix domain socket connections only
local all all peer
into:
# "local" is for Unix domain socket connections only
local all all trust
Note
If your PostgreSQL database resides on a separate machine, you have to allow remote access to the databases in the pg_hba.conf file for the geonode user, and tell PostgreSQL to accept non-local connections in your postgresql.conf file.
Once the configuration file has been edited, restart PostgreSQL to make these changes effective:
systemctl restart postgresql-9.4
Create GeoNode Users Databases¶
Switch to postgres user:
su postgres
First create the geonode user. GeoNode is going to use this user to access the database:
createuser -P geonode
You will be prompted to set a password for the user. Enter geonode as the password.
Create geonode database with owner geonode:
createdb -O geonode geonode
And database geonode_data with owner geonode:
createdb -O geonode geonode_data
Create the PostGIS extension:
psql -d geonode_data -c 'CREATE EXTENSION postgis;'
Then adjust permissions:
psql -d geonode_data -c 'GRANT ALL ON geometry_columns TO PUBLIC;'
psql -d geonode_data -c 'GRANT ALL ON spatial_ref_sys TO PUBLIC;'
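Still as the postgres user, you can confirm the extension is active:

```shell
# Prints the PostGIS version string if the extension was created
psql -d geonode_data -c 'SELECT PostGIS_version();'
```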
And exit postgres user:
exit
- Linux Admin Intro
- This section describes how to setup a Virtual Machine running Ubuntu.
- GeoNode (vlatest) installation on Ubuntu 14.04
- This section will guide the user through the steps necessary to install GeoNode on Ubuntu.
- GeoNode (vlatest) installation on CentOS 7
This section will guide the user through the steps necessary to install GeoNode on CentOS.
Packages for automatic installation are provided only for Ubuntu, so the only option for installing GeoNode on a CentOS platform is installing it from source.
Users Workshop¶
Welcome to the GeoNode Training Users Workshop documentation vlatest.
This workshop will teach you how to use GeoNode, going in depth into what you can do with the software application. At the end of this section you will master all the GeoNode sections and entities from a user perspective.
You will know how to:
- Manage users accounts and how to modify them.
- Use and manage the different GeoNode basic resources.
- Use the GeoNode searching tools to find your resources.
- Manage Layers and Maps, update the styles and publish them.
- Load datasets into GeoNode and keep them synchronized with GeoServer.
Prerequisites
Before proceeding with the reading, it is strongly recommended to be familiar with the following concepts:
- GeoNode and Django framework basic concepts
- What is Python
- What a geospatial server is, plus basic knowledge of geospatial web services.
- What metadata and a catalog are.
- What a map and a legend are.
Accounts and users¶
GeoNode is primarily a social platform, and thus a primary component of any GeoNode instance is the user account. This section will guide you through account registration, updating your account information, and viewing other user accounts.
Creating a new account¶
Before you can save or edit any layers on a GeoNode instance, you need to create an account.
From any page in the web interface, you will see a Register link. Click that link, and the register form will appear
Note
Registration on the GeoNode instance must be open. If you don’t see the Register link, it is not possible to register unless the administrator of the site does it for you.
Sign in screen
On the next page, fill out the form. Enter a user name and password in the fields. Also, enter your email address for verification.
Registering for a new account
You will be returned to the welcome page. An email will be sent confirming that you have signed up. While you are now logged in, you will need to confirm your account. Navigate to the link that was sent in the email.
Confirming your email address
Click Confirm. You will be returned to the homepage.
Managing your profile¶
Your profile contains personal information.
Click on your user name in the top right of the screen. A drop-down list will show. Click on Profile to enter the Profile settings page.
Link to your profile
The next page shows your profile, which is currently empty.
Profile page
Click the Edit profile information link.
Link to edit your profile
On this page, your personal information can be set, including your avatar. Enter some details in the Profile box as well as your city and country info.
Editing your profile
When finished, click Update profile.
Link to save your profile updates
You will be returned to the main profile page. Now click Account settings.
Link to edit your account settings
On this page you can change your email address, time zone, and language. Your email should be populated already, but set the timezone to your current location.
Editing your account
When finished, click Save.
Setting notification preferences¶
By default GeoNode sends notifications to users for events they can subscribe to, such as a new layer being uploaded or a new rating added to a map.
You can adjust your notification settings by clicking on your user name in the top right of the screen. A drop-down list will show. Click on Notifications to enter the Notifications Settings page.
Make sure you have a verified email address to which notices can be sent. If not, click on the proposed link to add one.
Now check/uncheck the notification types you wish to receive or not receive. It is possible to be notified for the following events:
- Layer Created
- Layer Updated
- Layer Deleted
- Rating for Layer
- Comment for Layer
- Map Created
- Map Updated
- Map Deleted
- Rating for Map
- Comment for Map
- Document Created
- Document Updated
- Document Deleted
- Rating for Document
- Comment for Document
- User following you
- Request to download a resource
Viewing other user accounts¶
Now that your account is created, you can view other accounts on the system. Note that on the main profile page there are options for following (and blocking) other users.
Profile page
To see information about other users on the system, click the People link on the top toolbar. You will see a list of users registered on this system.
List of users
Click on the user name for a particular user. You will see the layers owned by this user.
List of layers owned by a user
You can also click Activities to see the activity feed.
List of users
If you are interested in keeping track of what this user does, go back to the previous page and click the Follow button.
A confirmation page will display. Click Confirm.
Confirming following a user
You will now be following this user, and your profile page will note this.
Success following a user
Document Types¶
The GeoNode welcome page shows a variety of information about the current GeoNode instance. At the top of the page is a toolbar showing quick links to document types: layers, maps and documents.
Document types in GeoNode welcome page
Data management tools built into GeoNode allow for integrated creation of data, documents, link to external documents, and map visualizations. Each dataset in the system can be shared publicly or restricted to allow access to only specific users. Social features like user profiles and commenting and rating systems allow for the development of communities around each platform to facilitate the use, management, and quality control of the data the GeoNode instance contains.
Layers¶
Layers are a primary component of GeoNode.
Layers are publishable resources representing a raster or vector spatial data source. Layers also can be associated with metadata, ratings, and comments.
By clicking the Layers link you will get a list of all published layers. If logged in as an administrator, you will also see the unpublished layers in the same list.
Layers in GeoNode toolbar
GeoNode allows the user to upload vector (currently only Shapefiles) and raster data in their original projections using a web form.
Vector data is uploaded in ESRI Shapefile format and satellite imagery and other kinds of raster data are uploaded as GeoTIFFs.
Layers list in GeoNode
Maps¶
Maps are a primary component of GeoNode.
Maps are comprised of various layers and their styles. Layers can be local layers in GeoNode as well as remote layers, either served from other WMS servers or by web service layers such as Google or MapQuest.
GeoNode maps also contain other information such as map zoom and extent, layer ordering, and style.
By clicking the Map link you will get a list of all published maps.
Maps in GeoNode toolbar
This toolbar allows you to create a map based on the uploaded layers, combine them with some existing layers and a remote web service layer, and then share the resulting map for public viewing. Once the data has been uploaded, GeoNode lets the user search for it geographically or via keywords, and create maps. All the layers are automatically reprojected to Web Mercator for map display, making it possible to use popular base layers such as OpenStreetMap, Google Satellite or Bing layers.
Documents¶
As with layers and maps, GeoNode allows you to publish tabular and text data and to manage metadata and associated documents.
By clicking the Documents link you will be brought to the Documents menu where a new subtoolbar can be seen.
Documents in GeoNode toolbar
Through the document detail page it is possible to view, download and manage a document.
Searching¶
In the GeoNode welcome page, click the Search button to bring up the Search page.
Search tool in GeoNode welcome page
This page contains a wealth of options for customizing a search for various information on GeoNode. This search form allows for much more fine-tuned searches than the simple search box available at the top of every page.
It is possible to search data by Text, Categories, Type, Keywords, Date, Regions or Extent.
Search page
Managing layers¶
After user accounts, the next primary component of GeoNode is the layer. Layers are a published resource representing a raster or vector spatial data source. Layers also can be associated with metadata, ratings, and comments.
In this section, you will learn how to create a new layer by uploading a local data set, add layer info, change the style of the layer, and share the results.
Uploading a layer¶
Now that we have taken a tour of GeoNode and viewed existing layers, the next step is to upload our own.
In your data pack is a directory called data. Inside that directory is a shapefile called san_andres_y_providencia_administrative.shp. This is a data set containing administrative boundaries for the San Andres Province. This will be the first layer that we will upload to GeoNode.
Navigate to the GeoNode welcome page.
Click the Layers link on the top toolbar. This will bring up the Layers menu.
Main toolbar for GeoNode
Layers menu
Click Upload Layers in the Layers toolbar. This will bring up the upload form
Layers toolbar
Upload Layers form
Fill out the form.
- Click on the Browse… button. This will bring up a local file dialog. Navigate to your data folder and select all four files composing the shapefile (san_andres_y_providencia_administrative.shp, san_andres_y_providencia_administrative.dbf, san_andres_y_providencia_administrative.shx, san_andres_y_providencia_administrative.prj). Alternatively you could drag and drop the four files in the Drop files here area.
- The upload form should now appear like this:
Files ready for upload
GeoNode has the ability to restrict who can view, edit, and manage layers. On the right side of the page, under Who can view and download this data?, select Any registered user. This will ensure that anonymous view access is disabled.
In the same area, under Who can edit this data?, select the Only the following users or groups option and type your username. This will ensure that only you are able to edit the data in the layer.
Permissions for new layer
Click Upload to upload the data and create a layer. A dialog will display showing the progress of the upload.
Upload in progress
Your layer has been uploaded to GeoNode. You will now be able to access its info page (by clicking the Layer Info button), access its metadata edit form (by clicking the Edit Metadata button), or manage its styles (by clicking the Manage Styles button).
Layer information¶
After upload, another form will be displayed, containing metadata about the layer. Change any information as desired, and then click Update at the very bottom of the form.

Layer metadata
After the update, the layer will display in a preview window.

Layer preview
This page contains lots of options for managing this layer. Let’s look at a few of them:
Downloads¶
At the top of the page there are two buttons titled Download Layer and Download Metadata. These buttons let you extract geospatial data and metadata from within GeoNode. In this way, GeoNode allows for two-way data and metadata access; one can import as well as export data.
Data¶
- Click the Download Layer button. You will see a list of options of the supported export formats.

Available export formats
- Click the option for Zipped Shapefile.
- GeoNode will process the request and bring up a Save As dialog. Save this file to your computer, and note how it is the same content that was uploaded.
Metadata¶
- Click the Download Metadata button. You will see a list of options of the supported export formats.

Available metadata export formats
- Click the option for DUBLIN CORE.
- GeoNode will process the request and display XML metadata. Try clicking various metadata formats, and note how it is the same metadata content in various formats compatible with metadata and GIS packages.
Layer Detail Tabs¶
Scroll down the page toward the bottom. Five tabs are available: Info, Attributes, Share, Ratings, and Comments. The info tab is already highlighted, and presents basic information about the layer, of the kind that was seen on the layer list page.
Layer Info tab
Click the Attributes tab. This lists the attributes of the layer, including statistics (range, average, median and standard deviation). Layer attribute statistics are made available only for numeric attributes. As we can see, this layer’s attributes are not numeric, so no statistics are calculated.
Attributes tab
Click the Ratings tab. This tab allows you (and others viewing this page) to rate this layer. Ratings can be based on quality, accuracy, or any other metric. Click on the appropriate star to rate this layer.
Layer Ratings tab
Click the Comments tab. This tab allows you to leave a comment for others viewing this layer.
Layer Comments tab
Click the Add Comment button and enter a comment.
Adding a new comment
When finished, click Submit Comments
New comment posted
Sharing layers¶
GeoNode has the ability to restrict or allow other users to access a layer and share on social media.
Anonymous access¶
Go to the layer preview of the first layer uploaded, and copy the URL to that preview page.
Note
The URL should be something like:
http://GEONODE/layers/geonode:san_andres_y_providencia_administrative
Now log out of GeoNode by clicking on your profile name and selecting Log out.
Log out
When asked for confirmation, click the Log out button.
Confirming log out
Now paste the URL copied above into your browser address bar and navigate to that location.
You will be redirected to the Log In form. This is because when this layer was first uploaded, we set the view properties to any registered user. Once logged out, we are no longer a registered user, and so we are not able to see or interact with the layer unless we log in to GeoNode again.
Unable to view this protected layer
To prevent this, you need to ensure that the layer's permissions are set so that anyone can view it; otherwise others will not be able to see it on social networks.

- This is done by selecting anyone in the layer permissions tab. Be aware that this means your layer is now public!
Sharing with social media¶
On the taskbar below your username and profile picture there are three links to social media services: Twitter, Google Plus and Facebook.
Upon clicking these icons you will be taken through the application’s process for posting to the social network. Ensure the permissions are set so anyone can view the layer if you want unauthenticated users to be able to access it.
Adding more layers¶
We’ve uploaded one layer so far. There is one more layer in the data directory associated with this workshop, called san_andres_y_providencia_poi.shp.
Upload this layer, referring to the directions on uploading a layer. This time, leave the permissions set to their default values.
Uploading the layer
Finished upload
Edit Layer Style¶
The edit style task can be performed only by users who have the permission to do so.
In the Explore Layers page, choose the layer that you want to edit by clicking on its name or on its preview window.
In the Edit Layers page, click the Edit Layer button.
In the Edit Layer window, click the Edit button under the Style icon. In this interface it is possible to change the style of layers. GeoNode allows you to edit layer styles graphically, without the need to resort to programming or requiring a technical background.
In the following example the layer has one style, and one rule in that style. Click Edit in the Styles menu to change the Title and Abstract of the selected style.
Layer Styles window
User Styles window
Click the rule (Untitled 1) to select it, and then click Edit below it. Edit the style: choose the Basic tab to edit the layer's symbology, Labels to add and manage labels, and Advanced to manage styles by scale and condition. When done, click Save, then click on the word Layers to return to the layer list.
Basic Style Rule window
Labels Style Rule window
Advanced Style Rule window
In the Edit Layer window, click the Manage button under the Style icon. The Manage Styles function allows you to assign available styles to the selected layer.
Manage Layer Styles
Managing maps¶
The next primary component of GeoNode is the map. Maps are comprised of various layers and their styles. Layers can be both local layers in GeoNode as well as remote layers either served from other WMS servers or by web service layers such as Google or MapQuest.
GeoNode maps also contain other information such as map zoom and extent, layer ordering, and style.
In this section, we’ll create a map based on the layers uploaded in the previous section, combine them with some existing layers and a remote web service layer, and then share the resulting map for public viewing.
Creating a map¶
Adding layers¶
Click the Maps link on the top toolbar. This will bring up the list of maps.
Maps page
Currently, there aren’t any maps here, so let’s add one. Click the Create a New Map button.
A map composition interface will display.
Create maps interface
In this interface there is a toolbar, layer list, and map window. The map window contains the MapQuest OpenStreetMap layer by default. There are other service layers available here as well: Blue Marble, Bing Aerial With Labels, MapQuest, and OpenStreetMap.
Click on the New Layers button and select Add Layers.
Add layers link
Select all of the San Andres layers by clicking the top entry and Shift-clicking the bottom one. Click Add Layers to add them all to the map.
Selecting layers
Note
This selection includes not only the two layers uploaded in the previous section, but also the layers that were already hosted on GeoNode at the beginning of the workshop.
The layers will be added to the map. Click Done (right next to Add Layers at the bottom) to return to the main layers list.
Layers added to the map
Adding external layers¶
Once again, click on the New Layers button and select Add Layers.
Add layers link
From the top dropdown list, select Add a New Server…
Add a New Server
Enter the URL of the server, and select the correct type of server from the dropdown (WMS, TMS, or ArcGIS). For example, enter http://e-atlas.org.au/geoserver/wms for the URL and select Web Map Service as the type. Then click the Add Server button.
New Server URL and Type
Note - for security purposes, the URL you enter must be on a list of pre-approved external services set up by the GeoNode administrator. Otherwise you will receive a 403 error when trying to add the server.
A list of layers available from that server should appear momentarily. The layers must be available in the Web Mercator projection or they will not show up in the list. Select the layers you want to add to the map. Click Add Layers to add them all to the map.
Add layers
The layers will be added to the map. Click Done (right next to Add Layers at the bottom) to return to the main layers list.
Layers added to the map
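Behind the scenes, the list of layers offered by an external server comes from its WMS GetCapabilities document. As a minimal stdlib sketch of the filtering idea described above (only layers advertising the Web Mercator projection are usable), here is a hypothetical example; the sample XML and layer names are illustrative, not output from a real server:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily truncated GetCapabilities response for illustration.
SAMPLE_CAPS = """<?xml version="1.0"?>
<WMS_Capabilities xmlns="http://www.opengis.net/wms" version="1.3.0">
  <Capability>
    <Layer>
      <Layer><Name>ea:bathymetry</Name><CRS>EPSG:3857</CRS></Layer>
      <Layer><Name>ea:reefs</Name><CRS>EPSG:4326</CRS></Layer>
    </Layer>
  </Capability>
</WMS_Capabilities>"""

NS = {"wms": "http://www.opengis.net/wms"}

def web_mercator_layers(caps_xml):
    """Return names of layers advertising the Web Mercator CRS (EPSG:3857)."""
    root = ET.fromstring(caps_xml)
    names = []
    for layer in root.iter("{http://www.opengis.net/wms}Layer"):
        name = layer.find("wms:Name", NS)
        crs_codes = [c.text for c in layer.findall("wms:CRS", NS)]
        if name is not None and "EPSG:3857" in crs_codes:
            names.append(name.text)
    return names

print(web_mercator_layers(SAMPLE_CAPS))  # ['ea:bathymetry']
```

In this sample, only ea:bathymetry would appear in the Add Layers list, since ea:reefs does not advertise EPSG:3857.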
Saving the map¶
While we still have some work to do on our map, let’s save it so that we can come back to it later. Click on the Map button in the toolbar, and select Save Map.
Save map link
Enter a title and abstract for your map.
Save map dialog
Click Save. Notice that the link on the top right of the page changed to reflect the map’s name.
Saved map name
This link contains a permalink to your map. If you open this link in a new window, your map will appear exactly as it was saved.
Styling layers¶
In this interface, we can pause in our map creation and change the style of one of our uploaded layers. GeoNode allows you to edit layer styles graphically, without the need to resort to programming or requiring a technical background.
We’ll be editing the san_andres_y_providencia_poi
layer.
In the layer list, uncheck all of the layers except the above, so that only this one is visible (not including the base layer).
Only one layer visible
Zoom in closer using the toolbar or the mouse.
Zoomed in to see the layer better
In the layer list, click to select the remaining layer and then click the palette icon (Layer Styles). This will bring up the style manager.
Styles manager
This layer has one style (named the same as the layer) and one rule in that style. Click the rule (Untitled 1) to select it, and then click on Edit below it.
Edit style rule link
Edit the style. You can choose from simple shapes, add labels, and even adjust the look of the points based on attribute values and scale.
Editing basic style rules
Editing style labels
When done, click Save, then click on the word Layers to return to the layer list.
Styled layer
Using GeoNode with other applications¶
Your GeoNode project is based on core components which are interoperable and as such, it is straightforward for you to integrate with external applications and services. This section will walk you through how to connect to your GeoNode instance from other applications and how to integrate other services into your GeoNode project. When complete, you should have a good idea about the possibilities for integration, and have basic knowledge about how to accomplish it. You may find it necessary to dive deeper into how to do more complex integration in order to accomplish your goals, but you should feel comfortable with the basics, and feel confident reaching out to the wider GeoNode community for help.
OGC services¶
Since GeoNode is built on GeoServer which is heavily based on OGC services, the main path for integration with external services is via OGC Standards. A large number of systems, applications and services support adding WMS layers to them, but only a few key ones are covered below. WFS and WCS are also supported in a wide variety of clients and platforms and give you access to the actual data for use in GeoProcessing or to manipulate it to meet your requirements. GeoServer also bundles GeoWebCache which produces map tiles that can be added as layers in many popular web mapping tools including Google Maps, Leaflet, OpenLayers and others. You should review the reference material included in the first chapter to learn more about OGC Services and when evaluating external systems make sure that they are also OGC Compliant in order to integrate as seamlessly as possible.
Use GeoNode with:¶
ArcGIS¶
ArcGIS Desktop (ArcMap) supports adding WMS layers to your map project. The following set of steps will walk you through how to configure a WMS Layer from your GeoNode within ArcMap.
First, you can start with a new empty project or add these layers to your existing project.

Next click the ArcCatalog button on the toolbar to bring up its interface.

From there, double click the “Add WMS Server” item in the tree to bring up the dialog that lets you enter the details for your WMS.

Next, enter the URL for your GeoNode’s WMS endpoint which is the base url with /geoserver/wms appended to the end of the URL. You can also enter your credentials into the optional Account section of this dialog to gain access to non-public layers that your user may have access to.
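The endpoint rule used here (the base URL with /geoserver/wms appended, or /geoserver/wfs for WFS) can be sketched as a small helper. The example host is a placeholder, not a real deployment:

```python
def ogc_endpoint(base_url, service):
    """Return the GeoNode OGC endpoint for 'wms', 'wfs' or 'wcs'.

    Sketch of the convention described above: the service path is simply
    appended to the site's base URL.
    """
    if service not in ("wms", "wfs", "wcs"):
        raise ValueError("unsupported OGC service: " + service)
    return base_url.rstrip("/") + "/geoserver/" + service

# Placeholder base URL for illustration only.
print(ogc_endpoint("http://mygeonode.example.org", "wms"))
# http://mygeonode.example.org/geoserver/wms
```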

Click the “Get Layers” button to ask ArcMap to query your WMS’s GetCapabilities document to get the list of available layers.

After you click the OK button, your GeoNode layers will appear in the ArcCatalog Interface.

Once your server is configured in ArcMap, you can right click on one of the layers and investigate its properties.

In order to actually add the layer to your project, you can drag and drop it into the Table of Contents, or right click and select “Create Layer”. Your Layer will now be displayed in the map panel of your project.


Once the layer is in your projects Table of Contents, you can right click on it and select the Layer Properties option and select the Styles Tab to choose from the available styles for that layer.

Now that we have seen how to add a WMS layer to our ArcMap project, lets walk through how to add the same layers as a WFS which retrieves the actual feature data from your GeoNode rather than a rendered map as you get with WMS. Adding layers as a WFS gives you more control over how the layers are styled within ArcMap and makes them available for you to use with other ArcGIS tools like the Geoprocessing toolbox.
Note
Adding WFS layers to ArcMap requires that you have the Data Interoperability Extension installed. This extension is not included in ArcMap by default and is licensed and installed separately.
Start by opening up the ArcCatalog Interface within ArcMap and make sure that you have the “Interoperability Connections” option listed in the list.

Next select “Add Interoperability Connection” to bring up the dialog that lets you add the WFS endpoint from your GeoNode.

Select “WFS (Web Feature Service)” in the Format dropdown and enter the URL to the WFS endpoint for your GeoNode in the Dataset field. The WFS endpoint is your base URL + /geoserver/wfs

You will need to click the “Parameters” button to supply more connection information including your credentials which will give you the ability to use private layers that you have access to.

Select the Feature Types button to have ArcMap get a list of layers from the WFS Service of your GeoNode.

Select the layers that you want to add and click OK and ArcMap will import the features from your GeoNode into the system.

Depending on the projection of your data, you may receive a warning about Alignment and Accuracy of data transformations. You can specify the transformation manually or simply hit Close to ignore this dialog. If you don't want to be warned again, use the checkboxes in this dialog to hide these warnings temporarily or permanently.

Your WFS Layer will be added to your map and you can view it in the Map Panel. If you need to, use the “Zoom to Layer Extent” or other zoom tools to zoom to the bounds of your layer.

You can now use the identify tool to inspect a feature in your layer, or perform any other function that you can normally use to work with Vector Layers in ArcMap.

Since your layer was imported as actual vector features, you can use normal ArcMap styling tools to style the layer to match how you want it to be displayed.

Now that you have added layers from your GeoNode as both WMS and WFS, you can explore the other options available to you with these layers within ArcMap.
QGIS¶
QGIS (formerly Quantum GIS) is an open source, cross-platform desktop GIS application. It can also be used to add layers from your GeoNode instance as WMS or WFS. The process is very similar to adding the same layers to ArcMap, and we will walk through the necessary steps in the following section.
First, select “Add WMS Layer” from the Layer menu.

The Add WMS Layer Dialog will be displayed where you are able to specify the parameters to connect to your WMS server.

Next, you need to fill in the parameters to connect to your GeoNode instance. The URL for your GeoNode’s WMS is the base URL + /geoserver/wms
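When QGIS (or any WMS client) requests a map image from this endpoint, it issues a GetMap request. As a rough illustration of what such a request looks like, here is a sketch that assembles one by hand; the endpoint and layer name below are examples and not guaranteed to exist:

```python
from urllib.parse import urlencode

def getmap_url(wms_endpoint, layer, bbox, width=512, height=512,
               crs="EPSG:4326", fmt="image/png"):
    """Assemble a WMS 1.3.0 GetMap request URL for a single layer.

    bbox is (minx, miny, maxx, maxy) in the given CRS.
    """
    params = {
        "service": "WMS",
        "version": "1.3.0",
        "request": "GetMap",
        "layers": layer,
        "styles": "",                       # empty string = default style
        "crs": crs,
        "bbox": ",".join(str(v) for v in bbox),
        "width": width,
        "height": height,
        "format": fmt,
    }
    return wms_endpoint + "?" + urlencode(params)

url = getmap_url("http://mygeonode.example.org/geoserver/wms",
                 "geonode:san_andres_y_providencia_poi",
                 (-82.0, 12.0, -81.0, 14.0))
print(url)
```

Pasting the resulting URL into a browser should return a rendered PNG of the layer, which is a handy way to verify the endpoint outside of any client.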

After clicking the OK button, your server will show up in the list of servers. Make sure it's selected, then click the Connect button to have QGIS retrieve the list of layers from your GeoNode.

Select the layers you want to add to your QGIS project and click “Add”.

Your layer will be displayed in the map panel.

You can then zoom into your features in the Map.

From there, you can use the identify tool to inspect the attributes of one of the features on the map.

Or, you can look at the layer metadata by right clicking on the layer and selecting Layer Properties and selecting the metadata tab.

Adding WFS servers and layers to your QGIS project is very similar to adding WMS. Depending on your version of QGIS, you may need to add the WFS plugin. You can use the Plugin manager to add it.

Once the plugin is installed, you can select the “Add WFS Layer” option from the Layer menu.

Step through the same process you did for WMS to create a new WFS connection. First specify server parameters and click OK.

Then click Connect to retrieve the list of layers on the server and select the layers you want to add and click Apply.

The layer(s) you selected will be displayed in the map panel.

You can use the same identify tool to inspect features in the map panel.

To look at more information about your layer, right click the layer in the Table of Contents and select Layer Properties. You can look at the list of fields.

… or set a style to match how you want your data to be displayed.

You now know how to add layers from your GeoNode instance to a QGIS project. You can explore all of the other options available to you in QGIS by consulting its documentation.
Google Earth¶
GeoNode’s built-in map interface lets you look at your layers and maps in the Google Earth plugin directly in your browser. You can switch to this 3D viewer directly in GeoNode by clicking the Google Earth icon in the map panel.

GeoServer will render your layer as an image until you are zoomed in sufficiently, and then it will switch to rendering it as a vector overlay that you can click on to view the attributes for the feature you clicked on.

You can also use this option in the GeoExplorer client by clicking the same button.

Note
Some of the GeoExplorer options will not be available to you when you are in this mode; they will be grayed out and inaccessible.
If instead you want to use layers from your GeoNode in the Google Earth client itself, you have a few options available to you.
First, you can select the KML option from the Download Layer menu to download the entire layer in a single KML file. Depending on the size of the layer, your GeoNode could take several seconds or longer to generate this KML and return it to you.

When the layer is generated, it will be downloaded to your desktop machine and you can simply double click it to open it in Google Earth.

Alternatively, you can use the “View in Google Earth” option in the Layer Download menu to view the layer in Google Earth using the same methodology described above depending on the zoom level.

This will download a small KMZ to your desktop that contains a reference to the layers on the server and you can double click it to open it in Google Earth.

Note
The basic difference between these two options is that the first downloads all of the data to your desktop at once and as such, the downloaded file can be used offline while the second is simply a Network Link to the layer on the server. Choose whichever method is best for your own needs and purposes.
Once you have added your layers to the Places panel in Google Earth, you can move them from the Temporary Places section into My Places if you wish to use them after your current Google Earth session is complete. You can arrange them in folders and use Google Earth functionality to save your project to disk. Consult Google Earth's documentation for more information about how to do this.
- Accounts and users
- GeoNode is primarily a social platform, and thus a primary component of any GeoNode instance is the user account. This section will guide you through account registration, updating your account information, and viewing other user accounts.
- Document Types
- The GeoNode welcome page shows a variety of information about the current GeoNode instance. At the top of the page is a toolbar showing quick links to document types: layers, maps and documents.
- Searching
- GeoNode advanced Search tool.
- Managing layers
- Create, delete, manage and share Layers on GeoNode.
- Edit Layer Style
- Beautify the Layer using the GeoNode Style editor.
- Managing maps
- Create, delete, manage and share Maps on GeoNode.
- Using GeoNode with other applications
- Your GeoNode project is based on core components which are interoperable and as such, it is straightforward for you to integrate with external applications and services. This section will walk you through how to connect to your GeoNode instance from other applications and how to integrate other services into your GeoNode project. When complete, you should have a good idea about the possibilities for integration, and have basic knowledge about how to accomplish it. You may find it necessary to dive deeper into how to do more complex integration in order to accomplish your goals, but you should feel comfortable with the basics, and feel confident reaching out to the wider GeoNode community for help.
Administrators Workshop¶
Welcome to the GeoNode Training Administrators Workshop documentation vlatest.
This workshop will teach how to install and manage a deployment of the GeoNode software application. At the end of this section you will master all the GeoNode sections and entities from an administrator perspective.
You will know how to:
- Use the GeoNode’s Django Administration Panel.
- Use the console Management Commands for GeoNode.
- Configure and customize your GeoNode installation.
Prerequisites
Before proceeding, it is strongly recommended that you are familiar with the following concepts:
- GeoNode and Django framework concepts
- Good knowledge of Python
- Good knowledge of what a geospatial server is and of geospatial web services.
- Good knowledge of what metadata and catalogues are.
- Good knowledge of HTML and CSS.
Usage of the GeoNode’s Django Administration Panel¶
GeoNode has an administration panel based on the Django admin which can be used to do some database operations. Although most of the operations can and should be done through the normal GeoNode interface, the admin panel provides a quick overview and management tool over the database.
It should be highlighted that the sections not covered in this guide are meant to be managed through GeoNode.
Accessing the admin panel¶
Only staff users (including superusers) can access the admin interface.
Note
A user's staff membership can be set through the admin panel itself; see how in the Manage users and groups through the admin panel section.
The link to access the admin interface can be found by clicking in the upper right corner on the user name, see figure

Manage users and groups through the admin panel¶
The admin section called Auth has the link to access the Groups while the section called People has the link to access the Users, see figure


Users¶
Adding a user¶
By clicking on the “add” link to the right of the Users link, it is possible to add a new user to the GeoNode site. A simple form asking for username and password will be presented, see figure

Upon clicking “save” a new form will be presented asking for some personal information and the rights the user should have.
For a normal, unprivileged user it is enough to just fill in the personal information and then confirm with “save”.
If the user needs to access the admin panel or be a superuser, just tick the “staff” and “superuser” checkboxes.

Changing a user¶
To modify an existing user click on “Users” then on a username in the list. The same form will be presented.

Groups¶
Although the “Groups” permissions system is not yet implemented in GeoNode, it is possible to create new groups with sets of permissions which will be inherited by all the group members.
The creation and management of a group is done in a way very similar to that of a user.
Manage profiles using the admin panel¶
So far GeoNode implements two distinct roles that can be assigned to resources such as layers, maps or documents:
- party who authored the resource
- party who can be contacted for acquiring knowledge about or acquisition of the resource
These two roles can be set in the GeoNode interface by accessing the metadata page and setting the “Point of Contact” and “Metadata Author” fields respectively.
It is possible for an administrator to add new roles if needed, by clicking on the “Add Role” button in the “Base” -> “Contact Roles” section:

Clicking on the “People” section (see figure) will open a web form with some personal information plus a section called “Users”.

It is important that this last section is not modified here unless the administrator is very confident in that operation.

Manage the metadata categories using the admin panel¶
In the “Base” section of the admin panel there are the links to manage the metadata categories used in GeoNode

The metadata categories are:
- Regions
- Restriction Code Types
- Spatial Representation Types
- Topic Categories
The other links available should not be used.
Regions¶
The Regions can be updated, deleted and added as needed. Just after a fresh GeoNode installation, the regions contain all of the world's countries, identified by their ISO code.

Restriction Code Types¶
Since GeoNode is strictly tied to the standards, the restrictions cannot be added or deleted, nor can their identifiers be modified. This behavior is necessary to keep consistency in case of federation with CSW catalogues.
The restriction's GeoNode description field can in any case be modified if some kind of customisation is necessary, since it is just the string that will appear on the layer metadata page. If some of the restrictions are not needed within the GeoNode instance, it is possible to hide them by unchecking the “Is choice” field.

Spatial Representation Types¶
For this section, the same concepts as for the Restriction Code Types apply.

Topic Categories¶
Also for the Topic Categories, the only editable part is the GeoNode description. Being standard, it is assumed that every possible data type will fall under these category identifiers. If some of the categories are not needed within the GeoNode instance, it is possible to hide them by unchecking the “Is choice” field.

Manage layers using the admin panel¶
Some of the layer information can be edited directly through the admin interface, although the best place for this is the layer -> metadata page in GeoNode.

It is not recommended to modify the Attributes or the Styles.
Clicking on the Layers link will present a list of layers. By selecting one of them it is possible to modify some information like the metadata, the keywords, etc. It is strongly recommended to limit the edits to the metadata and similar information.
Manage the maps using the admin panel¶
Currently the maps admin panel allows more metadata options than the GeoNode maps metadata page, so it is a good place to add more detailed information.

The “Map Layers” section should not be used.
By clicking on a map in the maps list, the metadata web form will be presented. It is possible to add or modify the information here. As for the layers, the more specific entries like the layers stack or the map coordinates should not be modified.
Manage the documents using the admin panel¶
As for the layers, most of the information related to the documents can and should be modified using the GeoNode’s document metadata page.

Through the document detail page it is possible to edit the metadata information. The fields related to the bounding box or the attached file should not be edited directly.
Management Commands for GeoNode¶
GeoNode comes with administrative commands to help with day to day tasks.
Below is the list of the ones that come from the GeoNode application, the full list can be obtained by doing:
python manage.py help
importlayers¶
Imports a file or folder with geospatial files to GeoNode.
It supports data in Shapefile and GeoTIFF format. It also picks up the styles if a .sld file is present.
Usage:
python manage.py importlayers <data_dir>
Additional options:
Usage: manage.py importlayers [options] path [path...]
Brings a data file or a directory full of data files into a GeoNode site. Layers are added to the Django database, the GeoServer configuration, and the GeoNetwork metadata index.
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on exception
-u USER, --user=USER Name of the user account which should own the imported
layers
-i, --ignore-errors Stop after any errors are encountered.
-o, --overwrite Overwrite existing layers if discovered (defaults
False)
-k KEYWORDS, --keywords=KEYWORDS
The default keywords, separated by comma, for the
imported layer(s). Will be the same for all imported
layers if multiple imports are
done in one command
-c CATEGORY, --category=CATEGORY
The category for the imported
layer(s). Will be the same for all imported layers
if multiple imports are done in one command
-r REGIONS, --regions=REGIONS
The default regions, separated by comma, for the
imported layer(s). Will be the same for all imported
layers if multiple imports are
done in one command
-t TITLE, --title=TITLE
The title for the imported
layer(s). Will be the same for all imported layers
if multiple imports are done in one command
-p, --private Make layer viewable only to owner
--version show program's version number and exit
-h, --help show this help message and exit
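Conceptually, importlayers walks the given path and picks out the files it can upload. The following is a rough stdlib sketch of that discovery step only, not GeoNode's actual implementation; the extension sets are an assumption based on the supported formats listed above:

```python
import os

# Assumed extension sets for the formats named above (Shapefile, GeoTIFF).
# Companion files (.dbf, .shx, .prj, .sld) travel alongside the .shp and
# are not listed individually.
VECTOR_EXTS = {".shp"}
RASTER_EXTS = {".tif", ".tiff", ".geotif", ".geotiff"}

def find_importable(path):
    """Return a sorted list of geospatial files under path.

    Accepts either a single file or a directory, mirroring the
    "file or folder" behavior described above.
    """
    if os.path.isfile(path):
        candidates = [path]
    else:
        candidates = [
            os.path.join(root, name)
            for root, _dirs, files in os.walk(path)
            for name in files
        ]
    exts = VECTOR_EXTS | RASTER_EXTS
    return sorted(f for f in candidates
                  if os.path.splitext(f)[1].lower() in exts)
```

For example, a directory containing roads.shp, roads.dbf and elevation.tif would yield only the .shp and .tif entries.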
updatelayers¶
Update the GeoNode application with data from GeoServer.
This is useful to add data in formats that are not supported in GeoNode by default, for example to link it to ArcSDE datastores. The updatelayers command provides several options that can be used to control how layer information is read from GeoServer and updated in GeoNode. Refer to ‘Additional Options’.
Usage:
python manage.py updatelayers
Additional options:
Usage: manage.py updatelayers [options]
Update the GeoNode application with data from GeoServer
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on exception
-i, --ignore-errors Stop after any errors are encountered.
--skip-unadvertised Skip processing unadvertised layers from GeoServer.
--skip-geonode-registered
Just processing GeoServer layers still not registered
in GeoNode.
--remove-deleted Remove GeoNode layers that have been deleted from
GeoServer.
-u USER, --user=USER Name of the user account which should own the imported
layers
-f FILTER, --filter=FILTER
Only update data the layers that match the given
filter
-s STORE, --store=STORE
Only update data the layers for the given geoserver
store name
-w WORKSPACE, --workspace=WORKSPACE
Only update data on specified workspace
--version show program's version number and exit
-h, --help show this help message and exit
fixsitename¶
Uses SITENAME and SITEURL to set the values of the default site object.
This information is used in the page titles and when sending emails from GeoNode, for example, new registrations.
Usage:
python manage.py fixsitename
Additional options:
Usage: manage.py fixsitename [options]
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Raise on exception
--version show program's version number and exit
-h, --help show this help message and exit
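The idea behind fixsitename can be sketched in a few lines: the site name is taken from SITENAME, and the domain is the network location parsed out of SITEURL. This is an illustration of the mapping only, not the command's code, and the setting values below are placeholders:

```python
from urllib.parse import urlparse

# Placeholder values standing in for your settings module.
SITENAME = "GeoNode"
SITEURL = "http://mygeonode.example.org/"

def site_fields(sitename, siteurl):
    """Return the (name, domain) pair that ends up on the default
    django.contrib.sites Site object (a sketch of the idea only)."""
    return sitename, urlparse(siteurl).netloc

print(site_fields(SITENAME, SITEURL))
# ('GeoNode', 'mygeonode.example.org')
```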
Configuring Alternate CSW Backends¶
pycsw is the default CSW server implementation provided with GeoNode. This section will explain how to configure GeoNode to operate against alternate CSW server implementations.
Supported CSW server implementations¶
GeoNode additionally supports the following CSW server implementations:
- GeoNetwork opensource
- deegree
Since GeoNode communicates with alternate CSW configurations via HTTP, the CSW server can be installed and deployed independently of GeoNode if desired.
Installing the CSW¶
GeoNetwork opensource Installation¶
- Deploy GeoNetwork opensource by downloading geonetwork.war (see http://geonetwork-opensource.org/downloads.html) and deploying into your servlet container
- Follow the instructions at http://geonetwork-opensource.org/manuals/2.6.4/eng/users/quickstartguide/installing/index.html to complete the installation
- Test the server with a GetCapabilities request (http://localhost:8080/geonetwork/srv/en/csw?service=CSW&version=2.0.2&request=GetCapabilities)
See http://geonetwork-opensource.org/docs.html for further documentation.
deegree Installation¶
- Deploy deegree by downloading the deegree3 cswDemo .war (see http://wiki.deegree.org/deegreeWiki/DownloadPage) and deploying into your servlet container
- Create a PostGIS-enabled PostgreSQL database
- Follow the instructions at http://wiki.deegree.org/deegreeWiki/deegree3/CatalogueService#Run_your_own_installation to complete the installation
- Test the server with a GetCapabilities request (http://localhost:8080/deegree-csw-demo-3.0.4/services?service=CSW&version=2.0.2&request=GetCapabilities)
See http://wiki.deegree.org/deegreeWiki/deegree3/CatalogueService for further documentation.
Customizing GeoNode CSW configuration¶
At this point, the alternate CSW backend is ready for GeoNode integration. GeoNode’s CSW configuration (in geonode/settings.py) must be updated to point to the correct CSW. The example below configures GeoNetwork as the alternate CSW backend:
# CSW settings
CATALOGUE = {
    'default': {
        # The underlying CSW implementation
        # default is pycsw in local mode (tied directly to GeoNode Django DB)
        #'ENGINE': 'geonode.catalogue.backends.pycsw_local',
        # pycsw in non-local mode
        #'ENGINE': 'geonode.catalogue.backends.pycsw',
        # GeoNetwork opensource
        'ENGINE': 'geonode.catalogue.backends.geonetwork',
        # deegree and others
        #'ENGINE': 'geonode.catalogue.backends.generic',

        # The FULLY QUALIFIED base url to the CSW instance for this GeoNode
        #'URL': '%scatalogue/csw' % SITEURL,
        'URL': 'http://localhost:8080/geonetwork/srv/en/csw',
        #'URL': 'http://localhost:8080/deegree-csw-demo-3.0.4/services',

        # login credentials (for GeoNetwork)
        'USER': 'admin',
        'PASSWORD': 'admin',
    }
}
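To make switching between backends less error-prone, the choices above can be captured in a small helper. This is a hypothetical sketch, not part of GeoNode: the `catalogue_config` function and its argument names are invented here, and the URLs are the same local defaults used in the example above.

```python
# Hypothetical helper (NOT part of GeoNode): build the CATALOGUE setting
# for one of the CSW backends discussed above.
def catalogue_config(backend, site_url='http://localhost:8000/'):
    engines = {
        # pycsw in local mode (tied directly to the GeoNode Django DB)
        'pycsw_local': ('geonode.catalogue.backends.pycsw_local',
                        '%scatalogue/csw' % site_url),
        # GeoNetwork opensource
        'geonetwork': ('geonode.catalogue.backends.geonetwork',
                       'http://localhost:8080/geonetwork/srv/en/csw'),
        # deegree and other generic CSW servers
        'deegree': ('geonode.catalogue.backends.generic',
                    'http://localhost:8080/deegree-csw-demo-3.0.4/services'),
    }
    engine, url = engines[backend]
    config = {'default': {'ENGINE': engine, 'URL': url}}
    if backend == 'geonetwork':
        # GeoNetwork requires login credentials
        config['default'].update({'USER': 'admin', 'PASSWORD': 'admin'})
    return config

CATALOGUE = catalogue_config('geonetwork')
```

The helper simply centralizes the engine/URL pairs so that a typo in one of the dotted backend paths surfaces as a KeyError rather than a silent misconfiguration.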
Customize the look and feel¶
Warning
These instructions are only valid if you’ve installed GeoNode following the guide at Setup & Configure HTTPD !!
You might want to change the look of GeoNode, editing the colors and the logo of the website and adjust the templates for your needs. To do so, you first have to set up your own geonode project from a template. If you’ve successfully done this, you can go further and start theming your geonode project.
Setup steps¶
Warning
These instructions are only valid if you’ve installed GeoNode following the guide at Setup & Configure HTTPD !!
If you are working remotely, you should first connect to the machine that has your GeoNode installation. You will need to perform the following steps in a directory where you intend to keep your newly created project.
$ sudo su
$ cd /home/geonode
$ disable_local_repo.sh
$ apt-get install python-django
$ django-admin startproject geonode_custom --template=https://github.com/GeoNode/geonode-project/archive/master.zip -epy,rst
$ chown -Rf geonode: geonode_custom
$ exit
$ sudo pip install -e geonode_custom
Note
You should NOT use the name geonode for your project as it will conflict with your default geonode package name.
These commands create a new project based on the geonode-project template.
Make sure that the directories are reachable and have the correct rights for the users geonode and www-data:
$ sudo chown -Rf geonode: *
$ sudo chmod -Rf 775 geonode_custom
If you have a brand new installation of GeoNode, rename /home/geonode/geonode/local_settings.py.sample to local_settings.py and edit its content by setting the SITEURL and SITENAME. This file will be the main settings file for your project: it inherits all the settings from the original one, and you can override the ones you need.
Note
You can also decide to copy the /home/geonode/geonode/local_settings.py.sample to /path/to/geonode_custom/geonode_custom/local_settings.py in order to keep all the custom settings confined into the new project.
Warning
In order for the edits to the local_settings.py file to take effect, you have to restart apache.
Edit the file /etc/apache2/sites-available/geonode.conf and change the following directive from:
WSGIScriptAlias / /home/geonode/geonode/wsgi/geonode.wsgi
to:
WSGIScriptAlias / /home/geonode/geonode_custom/geonode_custom/wsgi.py
$ sudo vi /etc/apache2/sites-available/geonode.conf

WSGIScriptAlias / /home/geonode/geonode_custom/geonode_custom/wsgi.py
...
<Directory "/home/geonode/geonode_custom/geonode_custom/">
...
Edit the file /etc/apache2/sites-available/geonode.conf and modify the DocumentRoot as follows:
Note
It’s a good practice to make copies and backups of the configuration files before modifying or updating them in order to revert the configuration at the previous state if something goes wrong.
<VirtualHost *:80>
    ServerName http://localhost
    ServerAdmin webmaster@localhost
    DocumentRoot /home/geonode/geonode_custom/geonode_custom

    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined

    WSGIProcessGroup geonode
    WSGIPassAuthorization On
    WSGIScriptAlias / /home/geonode/geonode_custom/geonode_custom/wsgi.py

    <Directory "/home/geonode/geonode_custom/geonode_custom/">
        <Files wsgi.py>
            Order deny,allow
            Allow from all
            Require all granted
        </Files>

        Order allow,deny
        Options Indexes FollowSymLinks
        Allow from all
        IndexOptions FancyIndexing
    </Directory>
    ...
Then regenerate the static JavaScript and CSS files from /path/to/geonode_custom/ and restart apache
$ cd /home/geonode/geonode_custom
$ python manage.py collectstatic
$ python manage.py syncdb
$ cd /home/geonode/geonode
$ sudo pip install -e .
$ sudo service apache2 restart
Customize the Look & Feel¶
Now you can edit the templates in geonode_custom/templates and the CSS and images to match your needs, as shown in customize.theme_admin.
Note
After going through the theming guide you’ll have to return to this site to execute one more command in order to finish the theming!
When you’ve done the changes, run the following command in the geonode_custom folder:
$ cd /home/geonode/geonode_custom
$ python manage.py collectstatic
And now you should see all the changes you’ve made to your GeoNode.
Source code revision control¶
It is recommended that you immediately put your new project under source code revision control. The GeoNode development team uses Git and GitHub and recommends that you do the same. If you do not already have a GitHub account, you can easily set one up. A full review of Git and distributed source code revision control systems is beyond the scope of this tutorial, but you may find the Git Book useful if you are not already familiar with these concepts.
Create a new repository in GitHub. You should use the GitHub user interface to create a new repository for your new project.
Creating a new GitHub Repository From GitHub’s Homepage
Specifying new GitHub Repository Parameters
Your new Empty GitHub Repository
Initialize your own repository in the geonode_custom folder:
$ sudo git init
Add the remote repository reference to your local git configuration:
$ sudo git remote add origin <https url of your custom repo>

For example:

$ sudo git remote add origin https://github.com/geosolutions-it/geonode_custom.git
Add your project files to the repository:
$ sudo git add .
Commit your changes:
# The following two commands must be issued ONLY once:
$ sudo git config --global user.email "geo@geo-solutions.it"
$ sudo git config --global user.name "GeoNode Training"

$ sudo git commit -am "Initial commit"
Push to the remote repository:
$ sudo git push origin master
Further Reading¶
- If you want more information on how GitHub works and how to contribute to the GeoNode project, go to the section “Contributing to GeoNode”
- If you want to customize the Logo and Style of geonode_custom, go to the section “Theming your GeoNode project”
Here below you can find some more details about the custom project structure and info on some of the most important Python files you may want to edit.
The following section is mostly oriented to advanced users and developers.
Project structure¶
Your GeoNode project will now be structured as depicted below:
|-- README.rst
|-- manage.py
|-- geonode_custom
| |-- __init__.py
| |-- settings.py
| |-- local_settings.py
| |-- static
| | |-- README
| | |-- css
| | | |-- site_base.css
| | |-- img
| | | |-- README
| | |-- js
| | |-- README
| |-- templates
| | |-- site_base.html
| | |-- site_index.html
| |-- urls.py
| |-- wsgi.py
|-- setup.py
You can also view your project on GitHub.
Viewing your project on GitHub
Each of the key files in your project are described below.
manage.py¶
manage.py
is the main entry point for managing your project during
development. It allows running all the management commands from each app in your
project. When run with no arguments, it will list all of the management commands.
settings.py¶
settings.py
is the primary settings file for your project. It imports the settings from the system geonode and adds the local paths. It is quite
common to put all sensible defaults here and keep deployment specific configuration
in the local_settings.py
file. All of the possible settings values and
their meanings are detailed in the Django documentation.
A common paradigm for handling ‘local settings’ (and in other areas where some python module may not be available) is:

try:
    from local_settings import *
except:
    pass
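The pattern can be demonstrated standalone. In this sketch (the local_settings module name is the same hypothetical one as above), a missing local_settings module simply leaves the defaults in place; note that catching ImportError, rather than using a bare except, avoids silently hiding real errors inside local_settings.py:

```python
# Defaults, as they would appear in settings.py.
DEBUG = False
SITENAME = 'GeoNode'

try:
    # If a local_settings.py exists on the path, any names it defines
    # override the defaults above; if it does not exist, nothing happens.
    from local_settings import *  # noqa: F401,F403
except ImportError:
    pass
```

Running this without a local_settings.py on the path leaves DEBUG and SITENAME at their default values.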
This is not required and there are many other solutions to handling varying deployment configuration requirements.
urls.py¶
urls.py
is where your application specific URL routes go. Additionally,
any overrides can be placed here, too.
wsgi.py¶
This is a generated file to make deploying your project to a WSGI server easier.
Unless there is very specific configuration you need, wsgi.py
can be
left alone.
setup.py¶
There are several packaging options in python but a common approach is to place
your project metadata (version, author, etc.) and dependencies in setup.py
.
This is a large topic and not necessary to understand while getting started with GeoNode development but will be important for larger projects and to make development easier for other developers.
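As a concrete illustration, a minimal setup.py for the project created in this guide might look like the following. This is a sketch only: the metadata values are placeholders (not the official project metadata), and a real project would list its actual dependencies; the setup() keywords used here (name, version, description, packages, install_requires) are standard setuptools parameters.

```python
# Minimal packaging sketch for the geonode_custom project created in this
# guide (all values are placeholders).
from setuptools import setup, find_packages

setup(
    name='geonode_custom',
    version='0.1.0',
    description='A custom GeoNode project',
    packages=find_packages(),
    install_requires=[
        # a real project lists its dependencies here, e.g. 'Django'
    ],
)
```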
static¶
The static
directory will contain your fixed resources: css, html,
images, etc. Everything in this directory will be copied to the final media
directory (along with the static resources from other apps in your project).
templates¶
All of your projects templates go in the templates
directory. While
no organization is required for your project specific templates, when overriding
or replacing a template from another app, the path must be the same as the template
to be replaced.
Warning
These instructions are only valid if you’ve installed GeoNode using apt-get !!
One of the primary reasons to set up your own GeoNode project using this method is so that you can stay in sync with the mainline GeoNode as the core development team makes new releases. Your own project should not be adversely affected by these changes, but you will receive bug fixes and other improvements by staying in sync.
Upgrade GeoNode:
$ apt-get update
$ apt-get install geonode

Verify that your new project works with the upgraded GeoNode:

$ python manage.py runserver

Navigate to http://localhost:8000.
Warning
These instructions are only valid if you’ve installed GeoNode following the guide at Setup & Configure HTTPD !!
Upgrading from source code repo:
Upgrade GeoNode:
$ cd /home/geonode/geonode
$ git pull origin master

Verify that your new project works with the upgraded GeoNode:

$ python manage.py runserver

Navigate to http://localhost:8000.
Theming your GeoNode project¶
There are a range of options available to you if you want to change the default look and feel of your GeoNode project. Since GeoNode’s style is based on Bootstrap you will be able to make use of all that Bootstrap has to offer in terms of theme customization. You should consult Bootstrap’s documentation as your primary guide once you are familiar with how GeoNode implements Bootstrap and how you can override GeoNode’s theme and templates in your own project.
Logos and graphics¶
GeoNode intentionally does not include a large number of graphics files in its interface. This keeps page loading time to a minimum and makes for a more responsive interface. That said, you are free to customize your GeoNode’s interface by simply changing the default logo, or by adding your own images and graphics to deliver a GeoNode experience the way you envision it.
Your GeoNode project has a directory already set up for storing your own images at <geonode_custom>/static/img
. You should place any image files that you intend to use for your project in this directory.
Let’s walk through an example of the steps necessary to change the default logo.
Change to the img directory:

$ cd /home/geonode/geonode_custom/geonode_custom/static/img
If you haven’t already, obtain your logo image. The URL below is just an example, so you will need to change this URL to match the location of your file or copy it to this location:
$ sudo wget http://www2.sta.uwi.edu/~anikov/UWI-logo.JPG
$ sudo chown -Rf geonode: .
Change to the css directory:

$ cd /home/geonode/geonode_custom/geonode_custom/static/css
Override the CSS that displays the logo by editing <geonode_custom>/static/css/site_base.css with your favorite editor and adding the following lines, making sure to update the width, height, and URL to match the specifications of your image.

$ sudo vi site_base.css

.navbar-brand {
    width: 373px;
    height: 79px;
    background: transparent url("../img/UWI-logo.JPG") no-repeat scroll 15px 0px;
}
Restart your GeoNode project and look at the page in your browser:
$ cd /home/geonode
$ sudo rm -Rf geonode/geonode/static_root/*
$ cd geonode_custom
$ python manage.py collectstatic
$ sudo service apache2 restart
Note
It is good practice to clean up the static folder and the browser cache before reloading, to be sure that the changes have been correctly picked up and displayed on screen.
Visit your site at http://localhost/ or the remote URL for your site.

Custom logo
You can see that the header has been expanded to fit your graphic. In the following sections you will learn how to customize this header to make it look and function the way you want.
Note
You should commit these changes to your repository as you progress through this section, and get in the habit of committing early and often so that you and others can track your project on GitHub. Making many atomic commits and staying in sync with a remote repository makes it easier to collaborate with others on your project.
Cascading Style Sheets¶
In the last section you already learned how to override GeoNode’s default CSS rules to include your own logo. You are able to customize any aspect of GeoNode’s appearance this way. In the last screenshot, you saw that the main area in the homepage is covered up by the expanded header.
First, we’ll walk through the steps necessary to displace it downward so it is no longer hidden, then change the background color of the header to match the color in our logo graphic.
Reopen <geonode_custom>/static/css/site_base.css in your editor and add the following rule after the one added in the previous step:

$ cd /home/geonode/geonode_custom/geonode_custom/static/css
$ sudo vi site_base.css

#wrap {
    margin: 75px 75px;
}
Add a rule to change the background color of the header to match the logo graphic we used:
.navbar-inverse {
    background: #0e60c3;
}
Your project CSS file should now look like this:
.navbar-brand {
    width: 373px;
    height: 79px;
    background: url(../img/UWI-logo.JPG) no-repeat;
}

#wrap {
    margin: 75px 75px;
}

.navbar-inverse {
    background: #0e60c3;
}
Restart the development server and reload the page:
$ python manage.py collectstatic
$ sudo service apache2 restart
CSS overrides
Note
You can continue adding rules to this file to override the styles that are in the GeoNode base CSS file which is built from base.less. You may find it helpful to use your browser’s development tools to inspect elements of your site that you want to override to determine which rules are already applied. See the screenshot below. Another section of this workshop covers this topic in much more detail.

Screenshot of using Chrome’s debugger to inspect the CSS overrides
Templates and static pages¶
Now that we have changed the default logo and adjusted our main content area to fit the expanded header, the next step is to update the content of the homepage itself. Your GeoNode project includes two basic templates that you will use to change the content of your pages.
The file site_base.html
(in <geonode_custom>/templates/
) is the basic template that all other templates inherit from and you will use it to update things like the header, navbar, site-wide announcement, footer, and also to include your own JavaScript or other static content included in every page in your site. It’s worth taking a look at GeoNode’s base file on GitHub. You have several blocks available to you to for overriding, but since we will be revisiting this file in future sections of this workshop, let’s just look at it for now and leave it unmodified.
Open <geonode_custom>/templates/site_base.html
in your editor:
$ cd /home/geonode/geonode_custom/geonode_custom/templates
$ sudo vi site_base.html

{% extends "base.html" %}

{% block extra_head %}
    <link href="{{ STATIC_URL }}css/site_base.css" rel="stylesheet"/>
{% endblock %}
You will see that it extends from base.html
, which is the GeoNode template referenced above and it currently only overrides the extra_head
block to include our project’s site_base.css
which we modified in the previous section. You can see on line 22 of the GeoNode base.html template that this block is included in an empty state and is set up specifically for you to include extra CSS files as your project is already set up to do.
Now that we have looked at site_base.html
, let’s actually override a different template.
The file site_index.html is the template used to define your GeoNode project’s homepage. It extends GeoNode’s default index.html template and gives you the option to override specific areas of the homepage, like the hero area, while leaving areas like the “Latest Layers”, “Maps”, and “Contribute” sections as they are. You are of course free to override these sections if you choose, and the steps below show you how.
Open <geonode_custom>/templates/site_index.html in your editor.

Edit the <h1> element on line 9 to say something other than “Welcome”:

<h1>{% trans "UWI GeoNode" %}</h1>
Edit the introductory paragraph to include something specific about your GeoNode project:
<p>
    {% blocktrans %}
    UWI's GeoNode is setup for students and faculty to collaboratively create
    and share maps for their class projects. It is maintained by the UWI
    Geographical Society.
    {% endblocktrans %}
</p>
Change the Getting Started link to point to another website:
<span>
    For more information about the UWI Geographical society,
    <a href="http://uwigsmona.weebly.com/">visit our website</a>
</span>
Add a graphic to the hero area above the paragraph replaced in step 3:
<img src='http://uwigsmona.weebly.com/uploads/1/3/2/4/13241997/1345164334.png'>
Your edited site_index.html file should now look like this:

{% extends 'index.html' %}
{% load i18n %}

{% comment %}
This is where you can override the hero area block. You can simply modify the
content below or replace it wholesale to meet your own needs.
{% endcomment %}

{% block hero %}
<div class="jumbotron">
  <div class="container">
    <h1>{% trans "UWI GeoNode" %}</h1>
    <div class="hero-unit-content"/>
    <div class="intro">
      <img src='http://uwigsmona.weebly.com/uploads/1/3/2/4/13241997/1345164334.png'>
    </div>
    <p>
      {% blocktrans %}
      UWI's GeoNode is setup for students and faculty to collaboratively
      create and share maps for their class projects. It is maintained by
      the UWI Geographical Society.
      {% endblocktrans %}
    </p>
    <span>
      For more information about the UWI Geographical society,
      <a href="http://uwigsmona.weebly.com/">visit our website</a>
    </span>
  </div>
</div>
{% endblock %}
Refresh your GeoNode project and view the changes in your browser at http://localhost/ or the remote URL for your site:
$ python manage.py collectstatic
$ sudo service apache2 restart
From here you can continue to customize your site_index.html
template to suit your needs. This workshop will also cover how you can add new pages to your GeoNode project site.
Other theming options¶
You are able to change any specific piece of your GeoNode project’s style by adding CSS rules to site_base.css
, but since GeoNode is based on Bootstrap, there are many pre-defined themes that you can simply drop into your project to get a whole new look. This is very similar to WordPress themes and is a powerful and easy way to change the look of your site without much effort.
Bootswatch¶
Bootswatch is a site where you can download ready-to-use themes for your GeoNode project site. The following steps will show you how to use a theme from Bootswatch in your own GeoNode site.
Visit http://bootswatch.com and select a theme (we will use Sandstone for this example). Select the download bootstrap.css option in the menu:
Put this file into <geonode_custom>/static/css.

$ cd /home/geonode/geonode_custom/geonode_custom/static/css
Update the site_base.html template to include this file. It should now look like this:

$ cd /home/geonode/geonode_custom/geonode_custom/templates
$ sudo vi site_base.html
{% extends "base.html" %}

{% block extra_head %}
    <link href="{{ STATIC_URL }}css/site_base.css" rel="stylesheet"/>
    <link href="{{ STATIC_URL }}css/bootstrap.css" rel="stylesheet"/>
{% endblock %}
Refresh the development server and visit your site:
$ python manage.py collectstatic
$ sudo service apache2 restart
Your GeoNode project site is now using the Sandstone theme in addition to the changes you have made.
Debugging GeoNode Installations¶
There are several mechanisms to debug GeoNode installations, the most common ones are discussed in the following sections.
Viewing the logs¶
There are many kinds of logs in GeoNode, most of them are located in /var/log/geonode/
and will be explained below in order of relevance:
GeoNode main log: This is the output of the Django application generated by Apache; it may contain detailed information about uploads and high-level problems.

The default location is /var/log/geonode/apache.log or /var/log/apache2/error.log. It is set to a very low level (not much information is logged) by default, but its output can be increased by setting the logging level to DEBUG in /etc/geonode/local_settings.py.
GeoServer log: It contains most of the information related to problems with data, rendering and styling errors.
It can be accessed at GEOSERVER_DATA_DIR/logs/geoserver.log, which is usually /var/lib/tomcat7/webapps/geoserver/data/logs/geoserver.log or /var/lib/geoserver/geonode-data/logs/geoserver.log. It may also be symlinked in /var/log/geonode/geoserver.log.

Tomcat logs: Tomcat logs could indicate problems loading GeoServer. They can be found at /var/lib/tomcat7/logs/catalina.out or /var/lib/tomcat/geoserver/logs/catalina.out.

PostgreSQL logs: PostgreSQL is accessed by GeoServer and Django, so information about errors which are very hard to debug may be found by looking at PostgreSQL’s logs. They are located at /var/log/postgresql/postgresql-$(psql_version)-main.log, where $(psql_version) depends on your local installation.
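As an aside on the DEBUG logging level mentioned for the GeoNode main log: Django configures logging with the standard dictConfig format, and the effect of raising a logger to DEBUG can be previewed standalone. The sketch below is illustrative only — the handler and logger names are assumptions, not GeoNode’s actual LOGGING setting; in a real install you would adjust the levels in /etc/geonode/local_settings.py.

```python
import logging
import logging.config

# Illustrative dictConfig: raise the 'geonode' logger to DEBUG and send
# its output to the console (a real install would typically log to a file).
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        'geonode': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}
logging.config.dictConfig(LOGGING)

logging.getLogger('geonode').debug('now visible at DEBUG level')
```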
Enabling DEBUG mode¶
Django can be set to return nicely formatted exceptions, which are useful for debugging, instead of generic 500 errors. This is enabled by setting DEBUG=True in /home/geonode/geonode/geonode/local_settings.py (or /etc/geonode/local_settings.py if GeoNode has been installed using apt-get).
After enabling DEBUG, the Apache server has to be restarted for the changes to be picked up. In Ubuntu:
service apache2 restart
Other tips and tricks¶
Modifying GeoServer’s output strategy¶
Up to version 1.1, GeoNode used GeoServer’s SPEED output strategy by default, which meant that proper error messages were sacrificed for performance.
Unfortunately, this caused many errors to be masked as XML parsing errors when layers were not properly configured.
It is recommended to verify that the output strategy is set at least to PARTIAL_BUFFER2 (or a safer one, e.g. FILE) with a high value for the buffer size.
More information about the different strategies and the performance vs correctness trade off is available at GeoServer’s web.xml file.
The typical location of the file that needs to be modified is /var/lib/tomcat7/webapps/geoserver/WEB-INF/web.xml
as shown below:
<context-param>
<param-name>serviceStrategy</param-name>
<param-value>FILE</param-value>
</context-param>
Add the Django debug toolbar¶
Warning
The Debug Toolbar module must be disabled when running the server in production (with Apache2 HTTPD Server WSGI)
The django debug toolbar offers a lot of information about how the page you are seeing is created and used, from the database hits to the views involved. It is a configurable set of panels that display various debug information about the current request/response and, when clicked, display more details about the panel’s content.
To install it:
$ pip install django-debug-toolbar
Then edit your settings file /home/geonode/geonode/geonode/settings.py (or /etc/geonode/settings.py if GeoNode has been installed using apt-get) and add the following to the bottom of the file:

#debug_toolbar settings
if DEBUG:
    INTERNAL_IPS = ('127.0.0.1',)
    MIDDLEWARE_CLASSES += (
        'debug_toolbar.middleware.DebugToolbarMiddleware',
    )
    INSTALLED_APPS += (
        'debug_toolbar',
    )
    DEBUG_TOOLBAR_PANELS = [
        'debug_toolbar.panels.versions.VersionsPanel',
        'debug_toolbar.panels.timer.TimerPanel',
        'debug_toolbar.panels.settings.SettingsPanel',
        'debug_toolbar.panels.headers.HeadersPanel',
        'debug_toolbar.panels.request.RequestPanel',
        'debug_toolbar.panels.sql.SQLPanel',
        'debug_toolbar.panels.staticfiles.StaticFilesPanel',
        'debug_toolbar.panels.templates.TemplatesPanel',
        'debug_toolbar.panels.cache.CachePanel',
        'debug_toolbar.panels.signals.SignalsPanel',
        'debug_toolbar.panels.logging.LoggingPanel',
        'debug_toolbar.panels.redirects.RedirectsPanel',
    ]
    DEBUG_TOOLBAR_CONFIG = {
        'INTERCEPT_REDIRECTS': False,
    }
Stop Apache and start the server in Development Mode:
$ service apache2 stop
$ python manage.py runserver
Redirect the browser to http://localhost:8000. You should be able to see the Debug Panel on the right of the screen.
Django Debug Toolbar Enabled In Devel Mode
- More:
For more information on setting up and customizing the panels, read the official docs here
Changing the Default Language¶
GeoNode’s default language is English, but GeoNode users can change the interface language with the pulldown menu at the top-right of most GeoNode pages. Once a user selects a language GeoNode remembers that language for subsequent pages.
GeoNode Configuration¶
As root edit the geonode config file /home/geonode/geonode/geonode/settings.py
(or /etc/geonode/settings.py
if GeoNode has been installed using apt-get) and change LANGUAGE_CODE
to the desired default language.
Note
A list of language codes can be found in the global django config file /usr/local/lib/python2.7/dist-packages/django/conf/global_settings.py
(or /var/lib/geonode/lib/python2.7/site-packages/django/conf/global_settings.py
if GeoNode has been installed using apt-get).
For example, to make French the default language use:
LANGUAGE_CODE = 'fr'
Unfortunately Django overrides this setting, giving the language setting of a user’s browser priority. For example, if LANGUAGE_CODE
is set to French, but the user has configured their operating system for Spanish they may see the Spanish version when they first visit GeoNode.
Additional Steps¶
If this is not the desired behaviour, and all users should initially see the default LANGUAGE_CODE
, regardless of their browser’s settings, do the following steps to ensure Django ignores the browser language settings. (Users can always use the pulldown language menu to change the language at any time.)
As root create a new directory within GeoNode’s site packages:
mkdir /usr/lib/python2.7/dist-packages/setmydefaultlanguage

or, if GeoNode has been installed using apt-get:

mkdir /var/lib/geonode/lib/python2.7/site-packages/setmydefaultlanguage
As root create and edit a new file /usr/lib/python2.7/dist-packages/setmydefaultlanguage/__init__.py
and add the following lines:
class ForceDefaultLanguageMiddleware(object):
    """
    Ignore Accept-Language HTTP headers

    This will force the I18N machinery to always choose settings.LANGUAGE_CODE
    as the default initial language, unless another one is set via sessions or cookies

    Should be installed *before* any middleware that checks request.META['HTTP_ACCEPT_LANGUAGE'],
    namely django.middleware.locale.LocaleMiddleware
    """
    def process_request(self, request):
        if request.META.has_key('HTTP_ACCEPT_LANGUAGE'):
            del request.META['HTTP_ACCEPT_LANGUAGE']
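The middleware’s behavior can be exercised without a running Django stack. The sketch below uses a minimal stand-in for the request object (FakeRequest is invented for this test, not a Django class) and an `'HTTP_ACCEPT_LANGUAGE' in request.META` check, which is equivalent to the Python 2 has_key call above and also works on Python 3:

```python
class ForceDefaultLanguageMiddleware(object):
    """Same behavior as above, written with `in` instead of has_key."""
    def process_request(self, request):
        # Strip the browser's language preference so Django falls back
        # to settings.LANGUAGE_CODE.
        if 'HTTP_ACCEPT_LANGUAGE' in request.META:
            del request.META['HTTP_ACCEPT_LANGUAGE']


class FakeRequest(object):
    """Minimal stand-in for django.http.HttpRequest (illustrative only)."""
    def __init__(self, meta):
        self.META = dict(meta)


request = FakeRequest({'HTTP_ACCEPT_LANGUAGE': 'es-ES,es;q=0.9'})
ForceDefaultLanguageMiddleware().process_request(request)
assert 'HTTP_ACCEPT_LANGUAGE' not in request.META
```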
At the end of the GeoNode configuration file /home/geonode/geonode/geonode/settings.py
(or /etc/geonode/settings.py
if GeoNode has been installed using apt-get) add the following lines to ensure the above class is executed:
MIDDLEWARE_CLASSES += (
'setmydefaultlanguage.ForceDefaultLanguageMiddleware',
)
Restart¶
Finally restart Apache2 as root with:
service apache2 restart
Please refer to Translating GeoNode for information on editing GeoNode pages in different languages and create new GeoNode Translations.
Loading Data into a GeoNode¶
This module will walk you through the various options available to load data into your GeoNode from GeoServer, on the command line, or programmatically. You can choose among these techniques depending on what kind of data you have and how your GeoNode is set up.
Using importlayers to import Data into GeoNode¶
The geonode.layers app includes two management commands that you can use to load or configure data in your GeoNode. Both of these are invoked by using the manage.py script. This section will walk you through how to use the importlayers management command, and the subsequent section will lead you through the process of using updatelayers.
The first thing to do is to use the --help option of the importlayers command to investigate its options. You can display this help by executing the following command:
$ python manage.py importlayers --help
This will produce output that looks like the following:
Usage: manage.py importlayers [options] path [path...]
Brings a data file or a directory full of data files into a GeoNode site. Layers are added to the Django database, the GeoServer configuration, and the GeoNetwork metadata index.
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Print traceback on exception
-u USER, --user=USER Name of the user account which should own the imported
layers
-i, --ignore-errors Stop after any errors are encountered.
-o, --overwrite Overwrite existing layers if discovered (defaults
False)
-k KEYWORDS, --keywords=KEYWORDS
The default keywords for the imported layer(s). Will
be the same for all imported layers if multiple
imports are done in one command
--version show program's version number and exit
-h, --help show this help message and exit
While the description of most of the options should be self-explanatory, it’s worth reviewing some of the key options in a bit more detail.
- The -i option will force the command to stop when it first encounters an error. Without this option specified, the process will skip over layers with errors and continue loading the others.
- The -o option specifies that a layer with the same name as an existing one will be loaded, overwriting the existing layer.
- The -u option specifies the user that will own the imported layers. The same user will also be the point of contact and the metadata author for those layers.
- The -k option is used to add keywords to all of the imported layers.
The importlayers management command is invoked by specifying options as described above and the path to a single layer file or to a directory that contains multiple files. For the purposes of this exercise, let’s use the default set of testing layers that ship with geonode. You can replace this path with the directory containing your own shapefiles:
$ python manage.py importlayers -v 3 /var/lib/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/
This command will produce the following output to your terminal:
Verifying that GeoNode is running ...
Found 8 potential layers.
No handlers could be found for logger "pycsw"
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_administrative.shp' (1/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_coastline.shp' (2/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_highway.shp' (3/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_location.shp' (4/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_natural.shp' (5/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_poi.shp' (6/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_water.shp' (7/8)
[created] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/single_point.shp' (8/8)
Detailed report of failures:
Finished processing 8 layers in 30.0 seconds.
8 Created layers
0 Updated layers
0 Skipped layers
0 Failed layers
3.750000 seconds per layer
If you encounter errors while running this command, you can use the -v option to increase the verbosity of the output so you can debug the problem. The verbosity level can be set from 0 to 3, with 1 being the default. An example of the output when an error is encountered and the verbosity is set to 3 is shown below:
Verifying that GeoNode is running ...
Found 8 potential layers.
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_administrative.shp' (1/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_coastline.shp' (2/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_highway.shp' (3/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_location.shp' (4/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_natural.shp' (5/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_poi.shp' (6/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_water.shp' (7/8)
[failed] Layer for '/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/single_point.shp' (8/8)
Detailed report of failures:
/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/gisdata/data/good/vector/san_andres_y_providencia_administrative.shp
================
Traceback (most recent call last):
File "/Users/jjohnson/projects/geonode/geonode/layers/utils.py", line 682, in upload
keywords=keywords,
File "/Users/jjohnson/projects/geonode/geonode/layers/utils.py", line 602, in file_upload
keywords=keywords, title=title)
File "/Users/jjohnson/projects/geonode/geonode/layers/utils.py", line 305, in save
store = cat.get_store(name)
File "/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/geoserver/catalog.py", line 176, in get_store
for ws in self.get_workspaces():
File "/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/geoserver/catalog.py", line 489, in get_workspaces
description = self.get_xml("%s/workspaces.xml" % self.service_url)
File "/Users/jjohnson/.venvs/geonode/lib/python2.7/site-packages/geoserver/catalog.py", line 136, in get_xml
response, content = self.http.request(rest_url)
File "/Library/Python/2.7/site-packages/httplib2/__init__.py", line 1445, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/Library/Python/2.7/site-packages/httplib2/__init__.py", line 1197, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/Library/Python/2.7/site-packages/httplib2/__init__.py", line 1133, in _conn_request
conn.connect()
File "/Library/Python/2.7/site-packages/httplib2/__init__.py", line 799, in connect
raise socket.error, msg
error: [Errno 61] Connection refused
Note
This last section of output is repeated for all layers; only the first one is shown above.
This error indicates that GeoNode was unable to connect to GeoServer to load the layers. To solve this, you should make sure GeoServer is running and re-run the command.
If you encounter errors with this command that you cannot solve, you should bring them up on the GeoNode users mailing list.
You should now have the knowledge necessary to import layers into your GeoNode project from a directory on the server's filesystem, and you can use this to load many layers into your GeoNode at once.
Note
If you do not use the -u option, ownership of the imported layers will be assigned to the primary superuser in your system. You can use GeoNode's Django Admin interface to modify this after the fact if you want them to be owned by another user.
GeoServer Data Configuration¶
While it is possible to import layers directly from your server's filesystem into your GeoNode, you may have an existing GeoServer that already contains data, or you may want to configure data from a GeoServer that cannot be added by uploading files.
GeoServer supports a wide range of data formats and database connections. While many of these are not supported as GeoNode upload formats, if they can be configured in GeoServer, you can add them to your GeoNode by following the procedure described below.
GeoServer supports 3 types of data: Raster, Vector and Databases. For a list of the supported formats for each type of data, consult the following pages:
- http://docs.geoserver.org/latest/en/user/data/vector/index.html#data-vector
- http://docs.geoserver.org/latest/en/user/data/raster/index.html
- http://docs.geoserver.org/latest/en/user/data/database/index.html
Note
Some of these raster or vector formats or database types require that you install specific plugins in your GeoServer in order to use them. Please consult the GeoServer documentation for more information.
Let's walk through an example of configuring a new PostGIS database in GeoServer and then configuring those layers in your GeoNode.
First visit the GeoServer administration interface on your server. This is usually on port 8080 and is available at http://localhost:8080/geoserver/web/
You should log in with the superuser credentials you set up when you first configured your GeoNode instance.
Once you are logged in to the GeoServer Admin interface, you should see the following.

Note
The number of stores, layers and workspaces may be different depending on what you already have configured in your GeoServer.
Next you want to select the “Stores” option in the left hand menu, and then the “Add new Store” option. The following screen will be displayed.

In this case, we want to select the PostGIS store type to create a connection to our existing database. On the next screen you will need to enter the parameters to connect to your PostGIS database (alter as necessary for your own database).

Note
If you are unsure about any of the settings, leave them as the default.
The next screen lets you configure the layers in your database. This will of course be different depending on the layers in your database.

Select the “Publish” button for one of the layers and the next screen will be displayed where you can enter metadata for this layer. Since we will be managing this metadata in GeoNode, we can leave these alone for now.

You must specify the Declared SRS, and then select the “Compute from data” and “Compute from native bounds” links after the SRS is specified.

Click save and this layer will now be configured for use in your GeoServer.

The next step is to configure these layers in GeoNode. The updatelayers management command is used for this purpose. As with importlayers, it's useful to look at the command line options for this command by passing the --help option:
$ python manage.py updatelayers --help
This help option displays the following:
Usage: manage.py updatelayers [options]
Update the GeoNode application with data from GeoServer
Options:
-v VERBOSITY, --verbosity=VERBOSITY
Verbosity level; 0=minimal output, 1=normal output,
2=verbose output, 3=very verbose output
--settings=SETTINGS The Python path to a settings module, e.g.
"myproject.settings.main". If this isn't provided, the
DJANGO_SETTINGS_MODULE environment variable will be
used.
--pythonpath=PYTHONPATH
A directory to add to the Python path, e.g.
"/home/djangoprojects/myproject".
--traceback Print traceback on exception
-i, --ignore-errors Stop after any errors are encountered.
-u USER, --user=USER Name of the user account which should own the imported
layers
-w WORKSPACE, --workspace=WORKSPACE
Only update data on specified workspace
--version show program's version number and exit
-h, --help show this help message and exit
For this sample, we can use the default options. So enter the following command to configure the layers from our GeoServer into our GeoNode:
$ python manage.py updatelayers
The output will look something like the following:
[created] Layer Adult_Day_Care (1/11)
[created] Layer casinos (2/11)
[updated] Layer san_andres_y_providencia_administrative (3/11)
[updated] Layer san_andres_y_providencia_coastline (4/11)
[updated] Layer san_andres_y_providencia_highway (5/11)
[updated] Layer san_andres_y_providencia_location (6/11)
[updated] Layer san_andres_y_providencia_natural (7/11)
[updated] Layer san_andres_y_providencia_poi (8/11)
[updated] Layer san_andres_y_providencia_water (9/11)
[updated] Layer single_point (10/11)
[created] Layer ontdrainage (11/11)
Finished processing 11 layers in 45.0 seconds.
3 Created layers
8 Updated layers
0 Failed layers
4.090909 seconds per layer
Note
This example picked up 3 additional layers that were already in our GeoServer but not yet in our GeoNode.
Layers that already exist in your GeoNode are updated, and the configuration is synchronized between GeoServer and GeoNode.
You can now view and use these layers in your GeoNode.
Using GDAL and OGR to convert your Data for use in GeoNode¶
GeoNode supports uploading data in shapefile, GeoTIFF, CSV and KML formats (the latter two only if you are using the geonode.importer backend in the UPLOAD variable in settings.py). If your data is in other formats, you will need to convert it into one of these formats for use in GeoNode. This section will walk you through the steps necessary to convert your data into formats suitable for uploading into GeoNode.
You will need to make sure that you have the GDAL library installed on your system. On Ubuntu you can install this package with the following command:
$ sudo apt-get install gdal-bin
OGR (Vector Data)¶
OGR is used to manipulate vector data. In this example, we will use MapInfo .tab files and convert them to shapefiles with the ogr2ogr command. We will use sample MapInfo files from the website linked below.
http://services.land.vic.gov.au/landchannel/content/help?name=sampledata
You can download the Admin (Postcode) layer by issuing the following command:
$ wget http://services.land.vic.gov.au/sampledata/mif/admin_postcode_vm.zip
You will need to unzip this dataset by issuing the following command:
$ unzip admin_postcode_vm.zip
This will leave you with the following files in the directory where you executed the above commands:
|-- ANZVI0803003025.htm
|-- DSE_Data_Access_Licence.pdf
|-- VMADMIN.POSTCODE_POLYGON.xml
|-- admin_postcode_vm.zip
--- vicgrid94
--- mif
--- lga_polygon
--- macedon\ ranges
|-- EXTRACT_POLYGON.mid
|-- EXTRACT_POLYGON.mif
--- VMADMIN
|-- POSTCODE_POLYGON.mid
--- POSTCODE_POLYGON.mif
First, let's inspect this file set using the following command:
$ ogrinfo -so vicgrid94/mif/lga_polygon/macedon\ ranges/VMADMIN/POSTCODE_POLYGON.mid POSTCODE_POLYGON
The output will look like the following:
Had to open data source read-only.
INFO: Open of `vicgrid94/mif/lga_polygon/macedon ranges/VMADMIN/POSTCODE_POLYGON.mid'
using driver `MapInfo File' successful.
Layer name: POSTCODE_POLYGON
Geometry: 3D Unknown (any)
Feature Count: 26
Extent: (2413931.249367, 2400162.366186) - (2508952.174431, 2512183.046927)
Layer SRS WKT:
PROJCS["unnamed",
GEOGCS["unnamed",
DATUM["GDA94",
SPHEROID["GRS 80",6378137,298.257222101],
TOWGS84[0,0,0,-0,-0,-0,0]],
PRIMEM["Greenwich",0],
UNIT["degree",0.0174532925199433]],
PROJECTION["Lambert_Conformal_Conic_2SP"],
PARAMETER["standard_parallel_1",-36],
PARAMETER["standard_parallel_2",-38],
PARAMETER["latitude_of_origin",-37],
PARAMETER["central_meridian",145],
PARAMETER["false_easting",2500000],
PARAMETER["false_northing",2500000],
UNIT["Meter",1]]
PFI: String (10.0)
POSTCODE: String (4.0)
FEATURE_TYPE: String (6.0)
FEATURE_QUALITY_ID: String (20.0)
PFI_CREATED: Date (10.0)
UFI: Real (12.0)
UFI_CREATED: Date (10.0)
UFI_OLD: Real (12.0)
This gives you information about the number of features, the extent, the projection and the attributes of this layer.
Next, let's go ahead and convert this layer into a shapefile by issuing the following command:
$ ogr2ogr -t_srs EPSG:4326 postcode_polygon.shp vicgrid94/mif/lga_polygon/macedon\ ranges/VMADMIN/POSTCODE_POLYGON.mid POSTCODE_POLYGON
Note that we have also reprojected the layer to the WGS84 spatial reference system with the -t_srs ogr2ogr option.
The output of this command will look like the following:
Warning 6: Normalized/laundered field name: 'FEATURE_TYPE' to 'FEATURE_TY'
Warning 6: Normalized/laundered field name: 'FEATURE_QUALITY_ID' to 'FEATURE_QU'
Warning 6: Normalized/laundered field name: 'PFI_CREATED' to 'PFI_CREATE'
Warning 6: Normalized/laundered field name: 'UFI_CREATED' to 'UFI_CREATE'
This output indicates that some of the field names were truncated to fit the constraint that attribute names in shapefiles may be at most 10 characters long.
You will now have a set of files that make up the postcode_polygon.shp shapefile set. We can inspect them by issuing the following command:
$ ogrinfo -so postcode_polygon.shp postcode_polygon
The output will look similar to the output we saw above when we inspected the MapInfo file we converted from:
INFO: Open of `postcode_polygon.shp'
using driver `ESRI Shapefile' successful.
Layer name: postcode_polygon
Geometry: Polygon
Feature Count: 26
Extent: (144.030296, -37.898156) - (145.101137, -36.888878)
Layer SRS WKT:
GEOGCS["GCS_WGS_1984",
DATUM["WGS_1984",
SPHEROID["WGS_84",6378137,298.257223563]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]]
PFI: String (10.0)
POSTCODE: String (4.0)
FEATURE_TY: String (6.0)
FEATURE_QU: String (20.0)
PFI_CREATE: Date (10.0)
UFI: Real (12.0)
UFI_CREATE: Date (10.0)
UFI_OLD: Real (12.0)
These files can now be loaded into your GeoNode instance via the normal uploader.
Visit the upload page in your GeoNode, then drag and drop the files that compose the shapefile you generated with the GDAL ogr2ogr command (postcode_polygon.dbf, postcode_polygon.prj, postcode_polygon.shp, postcode_polygon.shx). Set the permissions as needed and then click the “Upload files” button.

As soon as the import process completes, you can go straight to the layer info page (“Layer Info” button), edit the metadata for the layer (“Edit Metadata” button), or manage its styles (“Manage Styles” button).

GDAL (Raster Data)¶
Now that we have seen how to convert vector layers into shapefiles using ogr2ogr, we will walk through the steps necessary to perform the same operation with raster layers. For this example, we will work with Arc/Info Binary and ASCII Grid data and convert it into GeoTIFF format for use in GeoNode.
First, you need to download the sample data to work with it. You can do this by executing the following command:
$ wget http://84.33.2.26/geonode/sample_asc.tar
You will need to uncompress this file by executing this command:
$ tar -xvf sample_asc.tar
You will be left with the following files on your filesystem:
|-- batemans_ele
| |-- dblbnd.adf
| |-- hdr.adf
| |-- metadata.xml
| |-- prj.adf
| |-- sta.adf
| |-- w001001.adf
| |-- w001001x.adf
|-- batemans_elevation.asc
The file batemans_elevation.asc is an Arc/Info ASCII Grid file, and the files in the batemans_ele directory make up an Arc/Info Binary Grid.
You can use the gdalinfo command to inspect both of these files by executing the following command:
$ gdalinfo batemans_elevation.asc
The output should look like the following:
Driver: AAIGrid/Arc/Info ASCII Grid
Files: batemans_elevation.asc
Size is 155, 142
Coordinate System is `'
Origin = (239681.000000000000000,6050551.000000000000000)
Pixel Size = (100.000000000000000,-100.000000000000000)
Corner Coordinates:
Upper Left ( 239681.000, 6050551.000)
Lower Left ( 239681.000, 6036351.000)
Upper Right ( 255181.000, 6050551.000)
Lower Right ( 255181.000, 6036351.000)
Center ( 247431.000, 6043451.000)
Band 1 Block=155x1 Type=Float32, ColorInterp=Undefined
NoData Value=-9999
You can then inspect the batemans_ele files by executing the following command:
$ gdalinfo batemans_ele
And this should be the corresponding output:
Driver: AIG/Arc/Info Binary Grid
Files: batemans_ele
batemans_ele/dblbnd.adf
batemans_ele/hdr.adf
batemans_ele/metadata.xml
batemans_ele/prj.adf
batemans_ele/sta.adf
batemans_ele/w001001.adf
batemans_ele/w001001x.adf
Size is 155, 142
Coordinate System is:
PROJCS["unnamed",
GEOGCS["GDA94",
DATUM["Geocentric_Datum_of_Australia_1994",
SPHEROID["GRS 1980",6378137,298.257222101,
AUTHORITY["EPSG","7019"]],
TOWGS84[0,0,0,0,0,0,0],
AUTHORITY["EPSG","6283"]],
PRIMEM["Greenwich",0,
AUTHORITY["EPSG","8901"]],
UNIT["degree",0.0174532925199433,
AUTHORITY["EPSG","9122"]],
AUTHORITY["EPSG","4283"]],
PROJECTION["Transverse_Mercator"],
PARAMETER["latitude_of_origin",0],
PARAMETER["central_meridian",153],
PARAMETER["scale_factor",0.9996],
PARAMETER["false_easting",500000],
PARAMETER["false_northing",10000000],
UNIT["METERS",1]]
Origin = (239681.000000000000000,6050551.000000000000000)
Pixel Size = (100.000000000000000,-100.000000000000000)
Corner Coordinates:
Upper Left ( 239681.000, 6050551.000) (150d 7'28.35"E, 35d39'16.56"S)
Lower Left ( 239681.000, 6036351.000) (150d 7'11.78"E, 35d46'56.89"S)
Upper Right ( 255181.000, 6050551.000) (150d17'44.07"E, 35d39'30.83"S)
Lower Right ( 255181.000, 6036351.000) (150d17'28.49"E, 35d47'11.23"S)
Center ( 247431.000, 6043451.000) (150d12'28.17"E, 35d43'13.99"S)
Band 1 Block=256x4 Type=Float32, ColorInterp=Undefined
Min=-62.102 Max=142.917
NoData Value=-3.4028234663852886e+38
You will notice that the batemans_elevation.asc file does not contain projection information, while the batemans_ele file does. Because of this, let's use the batemans_ele files for this exercise and convert them to a GeoTIFF for use in GeoNode. We will also reproject this file into WGS84 in the process. This can be accomplished with the following command:
$ gdalwarp -t_srs EPSG:4326 batemans_ele batemans_ele.tif
The output will show you the progress of the conversion and when it is complete, you will be left with a batemans_ele.tif file that you can upload to your GeoNode.
You can inspect this file with the gdalinfo command:
$ gdalinfo batemans_ele.tif
Which will produce the following output:
Driver: GTiff/GeoTIFF
Files: batemans_ele.tif
Size is 174, 130
Coordinate System is:
GEOGCS["WGS 84",
DATUM["WGS_1984",
SPHEROID["WGS 84",6378137,298.257223563,
AUTHORITY["EPSG","7030"]],
AUTHORITY["EPSG","6326"]],
PRIMEM["Greenwich",0],
UNIT["degree",0.0174532925199433],
AUTHORITY["EPSG","4326"]]
Origin = (150.119938943722502,-35.654598806259330)
Pixel Size = (0.001011114155919,-0.001011114155919)
Metadata:
AREA_OR_POINT=Area
Image Structure Metadata:
INTERLEAVE=BAND
Corner Coordinates:
Upper Left ( 150.1199389, -35.6545988) (150d 7'11.78"E, 35d39'16.56"S)
Lower Left ( 150.1199389, -35.7860436) (150d 7'11.78"E, 35d47' 9.76"S)
Upper Right ( 150.2958728, -35.6545988) (150d17'45.14"E, 35d39'16.56"S)
Lower Right ( 150.2958728, -35.7860436) (150d17'45.14"E, 35d47' 9.76"S)
Center ( 150.2079059, -35.7203212) (150d12'28.46"E, 35d43'13.16"S)
Band 1 Block=174x11 Type=Float32, ColorInterp=Gray
You can then follow the same steps we used above to upload the GeoTIFF file we created into GeoNode, and you will see your layer displayed in the Layer Info page.

Now that you have seen how to convert layers with both OGR and GDAL, you can use these techniques to work with your own data and get it prepared for inclusion in your own GeoNode.
More on Security and Permissions¶
Security and Permissions¶
This tutorial will guide you through the steps you can take to restrict access to the data uploaded to your GeoNode.
First it will be shown how a user can be created and what permissions a user can have. Then we will take a closer look at layers, maps and documents, and the different options you have for preventing certain users from viewing or editing your data.
Users¶
Your first step will be to create a user. There are three options for doing so; depending on which kind of user you want to create, you may choose a different option. We will start by creating a superuser, because this user is the most important: a superuser has all permissions without them being explicitly assigned.
The easiest way to create a superuser (in linux) is to open your terminal and type:
$ python manage.py createsuperuser
You will be asked for a username (in this tutorial we will call the superuser you now create your_superuser), an email address and a password.
Now that you've created a superuser, you should become familiar with the Django Admin Interface. As a superuser you have access to this interface, where you can manage users, layers, permissions and more. To learn more about this interface check this LINK. For now it will be enough to just follow the steps. To reach the Django Admin Interface, go to your GeoNode website and sign in with your_superuser. Once you've logged in, the name of your user will appear on the top right. Click on it and the following menu will show up:

Clicking on Admin causes the interface to show up.

Go to Auth -> Users and you will see all the users that exist at the moment. In your case it will only be your_superuser. Click on it, and you will see a section on Personal info, one on Permissions and one on Important dates. For the moment, the Permissions section is the most important.

As you can see, there are three boxes that can be checked and unchecked. Because you've created a superuser, all three boxes are checked by default. If only the active box were checked, the user would not be a superuser and would not be able to access the Django Admin Interface (which is only available to users with staff status). Therefore keep the following two things in mind:
- a superuser is able to access the Django Admin Interface and has all permissions on the data uploaded to GeoNode.
- an ordinary user (created from the GeoNode interface) only has the active status by default. Such a user cannot access the Django Admin Interface, and any further permissions have to be assigned explicitly.
Until now we’ve only created superusers. So how do you create an ordinary user? You have two options:
Django Admin Interface
First we will create a user via the Django Admin Interface, because we've still got it open. Go back to Auth -> Users and you should find a button on the right that says Add user.
Click on it and a form will appear. Name the new user test_user, choose a password and click Save at the bottom right of the page.
You should now be directed to the page where you can change the permissions of test_user. By default only active is checked. If you want this user to be able to access the admin interface as well, you could also check staff status. But for now we leave the settings as they are!
To test whether the new user was successfully created, go back to the GeoNode web page and try to sign in.
GeoNode website
- To create an ordinary user you can also just use the GeoNode website. If you installed GeoNode using a release, you should see a Register button on the top, beside the Sign in button (you might have to log out first).
Hit the button and again a form will appear for you to fill out. This user will be named geonode_user.
By hitting Sign up the user will be signed up, by default only with the active status.
As mentioned before, this status can be changed as well. To do so, sign in with your_superuser again and open the admin interface. Go again to Auth -> Users, where three users should now appear:
We now want to change the permissions of geonode_user so that this user will be able to access the admin interface as well. Click on geonode_user and you will automatically be taken to the page where you can change the permissions. Check the staff status box and hit Save to store the changes.
To sum up, we have now created three users with different kinds of permissions.
- your_superuser: This user is allowed to access the admin interface and has all available permissions on layers, maps etc.
- geonode_user: This user is permitted to access the admin interface, but permissions on layers, maps etc. have to be assigned.
- test_user: This user is not able to access the admin interface, and permissions on layers, maps etc. also have to be assigned.
You should now have an overview of the different kinds of users and how to create and edit them. You've also learned about the permissions a certain user has and how to change them using the Django Admin Interface.
Note
If you've installed GeoNode in development mode, the Register button won't be visible at first. To add this button to the website, you have to change REGISTRATION_OPEN = False to REGISTRATION_OPEN = True in settings.py. Then reload GeoNode and you should be able to see the Register button.
Layers¶
Now that we've created some users, we will take a closer look at the security of layers and how you can protect your data from being viewed or edited by unwanted users.
Hint
As already mentioned, it is important to know that a superuser has unrestricted access to all your uploaded data. That means you cannot ban a superuser from viewing, downloading or editing a layer!
The permissions on a certain layer can already be set when uploading your files. When the upload form appears (Layers -> Upload Layer) you will see the permission section on the right side:

As can be seen here, access to your layer is split into three groups:
- view and download data
- edit data
- manage and edit data
The difference between manage and edit layer and edit layer is that a user assigned to edit layer cannot change the permissions on the layer, whereas a user assigned to manage and edit layer can. You can now choose whether you want your layer to be viewed and downloaded by
- anyone
- any registered user
- a certain user (or group)
We will now upload our test layer as shown HERE. If you want your layer to be viewed only by certain users or a group, you have to choose Only users who can edit in the Who can view and download this data section. In the Who can edit this data section you write down the names of the users you want to have access to this data. For this first layer we will choose the settings shown in the following image:
If you now log out, your layer can still be seen, but unregistered users won't be able to edit it. Now sign in as geonode_user and click on the test layer. Above the layer you can see this:

The geonode_user is able to edit the test_layer. But before going deeper into this, we first have to take a look at another case. As an administrator you might also upload your layers to GeoServer and then make them available in GeoNode using updatelayers, or you might add the layers via the terminal using importlayers (LINK TUTORIAL). To set the permissions on such a layer, click on the test layer (uploaded via updatelayers) and you will see the same menu as shown in the image above. Click Edit layer and the menu will appear.

Choose edit permissions and a window with the permission settings will appear. This window can also be opened by scrolling down the page; on the right-hand side you should see a button like this.
Click on it and you will see the same window as before.
Now set the permissions of this layer using the following settings:
When you give a user permission to edit your data, that user is allowed to execute all of the following actions:
- edit metadata
- edit styles
- manage styles
- replace layer
- remove layer
So be aware that any user assigned to edit this layer can even remove it! In our case, only test_user and your_superuser have the rights to do so. geonode_user is not able to view, download or edit this layer.
Now log in as test_user. Below the test_layer you can see the following:
By clicking Edit Layer and then Edit Metadata on top of the layer, you can change this information. test_user is able to change all the metadata of this layer. We now want to change the point of contact, so scroll down until you see this:

Change the point of contact from _who_ever_created_this to test_user. Save your changes and you will now be able to see the following:
Warning
If you allow a user to view and download a layer, this user will also be able to edit the styles, even if not assigned to edit the layer! Keep this in mind!
To learn how you can edit metadata or change the styles go to this section LINK.
Maps¶
The permissions on maps are basically the same as on layers, except that there are fewer options for editing the map. Let's create a map (or already TUTORIAL?). Click on test_map and scroll down until you see this:

Here you can set the same permissions as known from the layer permissions! Set the permissions of this map as seen here:
Save your changes, then log out and log in as test_user. You should now be able to view the test_map and click on Edit map.

As you can see, this user is not able to change the permissions on this map. If you log in as geonode_user, you should be able to see the change map permissions button when you scroll down the page.
Documents¶
The same applies to your uploaded documents.
Backup & Restore GeoNode - Data Migration¶
The admin commands to backup and restore GeoNode allow you to consistently extract the GeoNode and GeoServer data models into a serializable meta-format, which is later interpreted by the restore procedure in order to rebuild the whole structure exactly, according to the current instance version (which may also differ from the original one).
In particular, the tool helps developers and admins to correctly extract and serialize the following resources from storage and deserialize them on the target GeoNode/GeoServer instance:
GeoNode (Resource Base Model):
- Layers (both raster and vectors)
- Maps
- Documents
- People with Credentials
- Permissions
- Associated Styles
- Static data and templates
GeoServer (Catalog):
- OWS Services configuration and limits
- Security model along with auth filters configuration, users and credentials
- Workspaces
- Stores (both DataStores and CoverageStores)
- Layers
- Styles
The tool exposes two GeoNode Management Commands, ‘backup’ and ‘restore’.
The commands allow you to:
- Fully back up GeoNode data and fixtures into a zip archive
- Fully back up the GeoServer configuration (physical datasets: tables, shapefiles, GeoTIFFs)
- Fully restore the GeoNode and GeoServer fixtures and catalog from the zip archive
- Migrate fixtures from old GeoNode models to the new ones
The usage of these commands is quite straightforward. It is also possible to run the backup and restore commands from the GeoNode Admin panel.
The first step is to ensure that everything is correctly configured and the requisites respected in order to successfully perform a backup and restore of GeoNode.
Warning
It is worth noting that this functionality requires the latest GeoServer Extension (2.9.x or greater) for GeoNode in order to work correctly.
Note
GeoServer full documentation is also available here GeoServer Docs
Requisites and Setup¶
Before running a GeoNode backup / restore, it is necessary to ensure everything is correctly configured and setup.
Settings¶
Depending on the admin's needs, the file settings.ini must be tuned a bit before running a backup / restore. It can be found at geonode/base/management/commands/settings.ini and by default it contains the following properties:
[database]
pgdump = pg_dump
pgrestore = pg_restore
[geoserver]
datadir = /opt/gs_data_dir
dumpvectordata = yes
dumprasterdata = yes
[fixtures]
#NOTE: Order is important
apps = people,account,avatar.avatar,base.backup,base.license,base.topiccategory,base.region,base.resourcebase,base.contactrole,base.link,base.restrictioncodetype,base.spatialrepresentationtype,guardian.userobjectpermission,guardian.groupobjectpermission,layers.uploadsession,layers.style,layers.layer,layers.attribute,layers.layerfile,maps.map,maps.maplayer,maps.mapsnapshot,documents.document,taggit
dumps = people,accounts,avatars,backups,licenses,topiccategories,regions,resourcebases,contactroles,links,restrictioncodetypes,spatialrepresentationtypes,useropermissions,groupopermissions,uploadsessions,styles,layers,attributes,layerfiles,maps,maplayers,mapsnapshots,documents,tags
# Migrate from GN 2.0 to GN 2.4
#migrations = base.resourcebase,layers.layer,layers.attribute,maps.map,maps.maplayer
#manglers = gn20_to_24.ResourceBaseMangler,gn20_to_24.LayerMangler,gn20_to_24.LayerAttributesMangler,gn20_to_24.MapMangler,gn20_to_24.MapLayersMangler
# Migrate from GN 2.4 to GN 2.4
migrations = base.resourcebase,layers.layer,layers.attribute,maps.map,maps.maplayer
manglers = gn24_to_24.ResourceBaseMangler,gn24_to_24.LayerMangler,gn24_to_24.LayerAttributesMangler,gn24_to_24.DefaultMangler,gn24_to_24.MapLayersMangler
The settings.ini file has a few different sections that must be carefully checked before running a backup / restore command.
Settings: [database] Section¶
[database]
pgdump = pg_dump
pgrestore = pg_restore
This section is quite simple. It contains only two (2) properties:
- pgdump; the path of the pg_dump local command.
- pgrestore; the path of the pg_restore local command.
Warning
These properties are ignored if GeoNode is not configured to use a database backend (see the settings.py and local_settings.py sections).
Note
Database connection settings (both for GeoNode and GeoServer) will be taken from the settings.py and local_settings.py configuration files. Be sure they are correctly configured (on the target GeoNode instance too) and that the database server is accessible while executing a backup / restore command.
Settings: [geoserver] Section¶
[geoserver]
datadir = /opt/gs_data_dir
dumpvectordata = yes
dumprasterdata = yes
This section allows you to enable / disable a full data backup / restore of GeoServer.
- datadir; the full path of the GeoServer Data Dir, by default /opt/gs_data_dir. The path must be accessible and fully writable by the geonode and / or httpd server users when executing a backup / restore command.
- dumpvectordata; a boolean allowing you to disable the dump of vector data from GeoServer (shapefiles or DB tables). If false (or no), vector data won't be stored / restored.
- dumprasterdata; a boolean allowing you to disable the dump of raster data from GeoServer (GeoTIFFs). If false (or no), raster data won't be stored / restored.
Warning
Enabling these options requires that the GeoServer Data Dir is accessible and fully writable by the geonode and / or httpd server users when executing a backup / restore command.
Settings: [fixtures] Section¶
[fixtures]
#NOTE: Order is important
apps = people,account,avatar.avatar,base.backup,base.license,base.topiccategory,base.region,base.resourcebase,base.contactrole,base.link,base.restrictioncodetype,base.spatialrepresentationtype,guardian.userobjectpermission,guardian.groupobjectpermission,layers.uploadsession,layers.style,layers.layer,layers.attribute,layers.layerfile,maps.map,maps.maplayer,maps.mapsnapshot,documents.document,taggit
dumps = people,accounts,avatars,backups,licenses,topiccategories,regions,resourcebases,contactroles,links,restrictioncodetypes,spatialrepresentationtypes,useropermissions,groupopermissions,uploadsessions,styles,layers,attributes,layerfiles,maps,maplayers,mapsnapshots,documents,tags
# Migrate from GN 2.0 to GN 2.4
#migrations = base.resourcebase,layers.layer,layers.attribute,maps.map,maps.maplayer
#manglers = gn20_to_24.ResourceBaseMangler,gn20_to_24.LayerMangler,gn20_to_24.LayerAttributesMangler,gn20_to_24.MapMangler,gn20_to_24.MapLayersMangler
# Migrate from GN 2.4 to GN 2.4
migrations = base.resourcebase,layers.layer,layers.attribute,maps.map,maps.maplayer
manglers = gn24_to_24.ResourceBaseMangler,gn24_to_24.LayerMangler,gn24_to_24.LayerAttributesMangler,gn24_to_24.DefaultMangler,gn24_to_24.MapLayersMangler
This section is the most complex one. Usually you don't need to modify it; only an expert user who knows Python and the GeoNode model structure should modify this section.
What its properties mean:
- apps; an ordered list of GeoNode Object Models (or Django apps). The backup / restore procedure will dump / restore their fixtures in a portable format.
- dumps; the list of file names associated with the Django apps. The order must be the same as in the property above. Each name represents the file name where the single app fixture is dumped to / read from.
- migrations; some fixtures must be enriched or updated before being restored on the target model. This property associates specific manglers with the fixtures. Manglers are simple Python classes that convert some attributes to other formats.
- manglers; the Python mangler classes to execute, according to the fixtures indicated by the migrations property. Mangler classes must be located in the geonode/base/management/commands/lib folder.
Note
Manglers must be used when migrating from one GeoNode version to another, i.e. where the original model differs from the target one. The default distribution provides manglers to convert from GeoNode 2.0 to GeoNode 2.4. Other versions may require other manglers or updates to the default ones.
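Since the apps and dumps lists must stay aligned one-to-one, a quick sanity check before running a backup can save a failed restore. The following is a minimal sketch using Python's standard configparser; the helper function and the sample file content are hypothetical and not part of GeoNode:

```python
from configparser import ConfigParser
from io import StringIO

def check_fixture_alignment(ini_text):
    """Hypothetical helper: verify that [fixtures] apps and dumps line up."""
    parser = ConfigParser()
    parser.read_file(StringIO(ini_text))
    apps = [a.strip() for a in parser.get("fixtures", "apps").split(",")]
    dumps = [d.strip() for d in parser.get("fixtures", "dumps").split(",")]
    if len(apps) != len(dumps):
        raise ValueError("apps (%d) and dumps (%d) differ in length"
                         % (len(apps), len(dumps)))
    # Order is significant, so return the pairs for visual inspection.
    return list(zip(apps, dumps))

# A shortened sample of the [fixtures] section, for illustration only.
sample = """
[fixtures]
apps = people,layers.layer,maps.map
dumps = people,layers,maps
"""
pairs = check_fixture_alignment(sample)
print(pairs[1])  # ('layers.layer', 'layers')
```

If the two lists drift apart (e.g. after adding a new app), the helper fails loudly instead of producing a backup whose fixture files are mis-named.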
Mangler Example¶
As explained in the section above, manglers are Python classes allowing developers to enrich / modify a fixture in order to fit the target GeoNode model.
The structure of a mangler is quite simple. Let's examine the ResourceBaseMangler of the gn20_to_24 library, a mangler used to convert a GeoNode 2.0 Resource Base to a GeoNode 2.4 one.
import datetime
import json
import re

class ResourceBaseMangler(DefaultMangler):

    def default(self, obj):
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, obj)

    def decode(self, json_string):
        """
        json_string is basically the string that you give to the json.loads method
        """
        default_obj = super(ResourceBaseMangler, self).decode(json_string)

        # manipulate your object any way you want
        # ....
        upload_sessions = []
        for obj in default_obj:
            obj['pk'] = obj['pk'] + self.basepk
            obj['fields']['featured'] = False
            obj['fields']['rating'] = 0
            obj['fields']['popular_count'] = 0
            obj['fields']['share_count'] = 0
            obj['fields']['is_published'] = True
            obj['fields']['thumbnail_url'] = ''

            if 'distribution_url' in obj['fields']:
                if not obj['fields']['distribution_url'] is None and 'layers' in obj['fields']['distribution_url']:
                    obj['fields']['polymorphic_ctype'] = ["layers", "layer"]
                    try:
                        p = '(?P<protocol>http.*://)?(?P<host>[^:/ ]+).?(?P<port>[0-9]*)(?P<details_url>.*)'
                        m = re.search(p, obj['fields']['distribution_url'])
                        if 'http' in m.group('protocol'):
                            obj['fields']['detail_url'] = self.siteurl + m.group('details_url')
                        else:
                            obj['fields']['detail_url'] = self.siteurl + obj['fields']['distribution_url']
                    except:
                        obj['fields']['detail_url'] = obj['fields']['distribution_url']
                else:
                    obj['fields']['polymorphic_ctype'] = ["maps", "map"]

            try:
                obj['fields'].pop("distribution_description", None)
            except:
                pass
            try:
                obj['fields'].pop("distribution_url", None)
            except:
                pass
            try:
                obj['fields'].pop("thumbnail", None)
            except:
                pass

            upload_sessions.append(self.add_upload_session(obj['pk'], obj['fields']['owner']))

        default_obj.extend(upload_sessions)

        return default_obj

    def add_upload_session(self, pk, owner):
        obj = dict()
        obj['pk'] = pk
        obj['model'] = 'layers.uploadsession'
        obj['fields'] = dict()
        obj['fields']['user'] = owner
        obj['fields']['traceback'] = None
        obj['fields']['context'] = None
        obj['fields']['error'] = None
        obj['fields']['processed'] = True
        obj['fields']['date'] = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
        return obj
It extends the DefaultMangler.

The DefaultMangler is a basic class implementing a JSONDecoder:

class DefaultMangler(json.JSONDecoder):

    def __init__(self, *args, **kwargs):
        self.basepk = kwargs.get('basepk', -1)
        self.owner = kwargs.get('owner', 'admin')
        self.datastore = kwargs.get('datastore', '')
        self.siteurl = kwargs.get('siteurl', '')

        super(DefaultMangler, self).__init__(*args)

    def default(self, obj):
        # Let the base class default method raise the TypeError
        return json.JSONEncoder.default(self, obj)

    def decode(self, json_string):
        """
        json_string is basically the string that you give to the json.loads method
        """
        default_obj = super(DefaultMangler, self).decode(json_string)

        # manipulate your object any way you want
        # ....
        return default_obj
By default this mangler unmarshalls the GeoNode Object Model from JSON and returns it to the management command.

The GeoNode Object Model can be modified while decoding by overriding the def decode(self, json_string) method:
- json_string; the actual parameter containing the JSON representation of the fixture.
- default_obj; the Python object decoded from the JSON representation of the fixture.

The decoded Python object can be enriched / modified before returning it to the management command.
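To make the pattern concrete, a minimal hypothetical mangler can be written with nothing but the standard library. It decodes a fixture and forces one field on every object; the class name and the field it sets are invented for this example and are not part of the GeoNode distribution:

```python
import json

# A stripped-down stand-in for DefaultMangler: just a JSONDecoder subclass.
class PublishAllMangler(json.JSONDecoder):
    """Hypothetical mangler that marks every fixture object as published."""

    def decode(self, json_string):
        # Decode the fixture JSON into Python objects first.
        default_obj = super(PublishAllMangler, self).decode(json_string)
        # Then enrich each object before handing it back.
        for obj in default_obj:
            obj['fields']['is_published'] = True
        return default_obj

# A one-object fixture, shaped like a Django serialized model instance.
fixture = '[{"pk": 1, "model": "layers.layer", "fields": {"is_published": false}}]'
objs = json.loads(fixture, cls=PublishAllMangler)
print(objs[0]['fields']['is_published'])  # True
```

Note that json.loads accepts the decoder class through its cls argument, which is exactly how the management command plugs manglers into the restore pipeline.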
From Command Line¶
The following sections show how to perform a backup / restore from the command line by using the Admin Management Commands.
In order to obtain a basic user guide for the management commands from the command line, just run:
python manage.py backup --help
python manage.py restore --help
--help
will provide the list of available command line options with a brief description.
It is worth noting that both commands allow the following option:
python manage.py backup --force / -f
python manage.py restore --force / -f
which instructs the management command not to ask for confirmation from the user, basically enabling a non-interactive mode.
Backup¶
In order to perform a backup just run the command:
python manage.py backup --backup-dir=<target_bk_folder_path>
The management command will automatically generate a .zip
archive file on the target folder in case of success.
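For unattended backups (e.g. from cron), the non-interactive flag can be combined with a timestamped target folder. A small sketch follows; build_backup_cmd is a hypothetical helper and the base directory is an example path, not a GeoNode convention:

```python
import datetime

def build_backup_cmd(base_dir):
    """Assemble a non-interactive GeoNode backup command line."""
    # Timestamp the target folder so repeated runs never collide.
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    return ["python", "manage.py", "backup", "--force",
            "--backup-dir=%s/backup_%s" % (base_dir, stamp)]

cmd = build_backup_cmd("/var/backups/geonode")
# Execute it with e.g. subprocess.check_call(cmd) from the GeoNode project root.
print(" ".join(cmd))
```

Keeping the command as a list (rather than a single string) makes it safe to hand to subprocess without shell quoting issues.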
Restore¶
In order to perform a restore just run the command:
python manage.py restore --backup-file=<target_restore_file_path>
Restore requires the path of one .zip
archive containing the backup fixtures.
Warning
The Restore will overwrite all of the target GeoNode / GeoServer users, catalog and database, so be very careful.
From GeoNode Admin GUI¶
- Login as admin and click on the Admin menu option.
- Look for Backups in the Base section.
- Add a new backup.
- Insert a Name and a Description; you must also provide the Base folder where the backups will be stored.
Warning
The Base folder must be fully writable by both the geonode and httpd server system users.
- Click on save and go back to the Backups list main section.
- The new Backup is not ready until you perform the Run Backup action; in order to do that, select the backup to run and from the Action menu select Run the Backup.
Note
A Backup is not ready until the Location attribute is filled.
- Click on Yes, I'm sure on the next page in order to perform the Backup.
Note
The server page will wait for the Backup to finish (or fail); at the end of the Backup you will be redirected to the main list page.
Note
After a successful run, the Location attribute is filled with the full path of the backup archive.
Warning
A Backup can always be updated later and / or executed again. The Location attribute will be updated accordingly.
- Execute as many Backups as you want; they can all point to the same Base Folder, as new backups generate new unique archive files each time.
- In order to Restore a zip archive, just select the instance to restore from the list and from the Action menu launch the Run the Restore option.
- Click on Yes, I'm sure on the next page in order to perform the Restore.
Note
The server page will wait for the Restore to finish (or fail).
Warning
The following target GeoNode folders must be fully writable by both the geonode and httpd server system users:
- geoserver_data_dir/data
- geonode / settings.MEDIA_ROOT
- geonode / settings.STATIC_ROOT
- geonode / settings.STATICFILES_DIRS
- geonode / settings.TEMPLATE_DIRS
- geonode / settings.LOCALE_PATHS
Warning
The Restore will overwrite all of the target GeoNode / GeoServer users, catalog and database, so be very careful.
- Usage of the GeoNode’s Django Administration Panel
- GeoNode has an administration panel based on the Django admin which can be used to do some database operations. Although most of the operations can and should be done through the normal GeoNode interface, the admin panel provides a quick overview and management tool over the database.
- Management Commands for GeoNode
- GeoNode comes with administrative commands to help with day to day tasks. This section shows the list of the ones that come from the GeoNode application.
- Configuring Alternate CSW Backends
- pycsw is the default CSW server implementation provided with GeoNode. This section will explain how to configure GeoNode to operate against alternate CSW server implementations.
- Customize the look and feel
- You might want to change the look of GeoNode, editing the colors and the logo of the website and adjust the templates for your needs. To do so, you first have to set up your own geonode project from a template. If you’ve successfully done this, you can go further and start theming your geonode project.
- Debugging GeoNode Installations
- There are several mechanisms to debug GeoNode installations, the most common ones are discussed in this section.
- Changing the Default Language
- GeoNode’s default language is English, but GeoNode users can change the interface language with the pulldown menu at the top-right of most GeoNode pages. Once a user selects a language GeoNode remembers that language for subsequent pages.
- Loading Data into a GeoNode
- This module will walk you through the various options available to load data into your GeoNode from GeoServer, on the command line or programmatically. You can choose from among these techniques depending on what kind of data you have and how your GeoNode is set up.
- More on Security and Permissions
This tutorial will guide you through the steps that can be taken in order to restrict access to data uploaded to GeoNode.
First of all it will be shown how a user can be created and what permissions they can have. Secondly we will take a closer look at layers, maps and documents and the different options you have for preventing certain users from viewing or editing your data.
- Backup & Restore GeoNode - Data Migration
- How to perform a full backup / restore of GeoNode and GeoServer catalogs and how to migrate data. Customization backup / restore fixtures and data manglers.
Developers Workshop¶
Welcome to the GeoNode Training Developers Workshop documentation vlatest.
This workshop will teach you how to develop with and for the GeoNode software application. This module will introduce you to the components that GeoNode is built with, the standards that it supports, the services it provides based on those standards, and an overview of its architecture.
- Prerequisites
- GeoNode is a web based GIS tool, and as such, in order to do development on GeoNode itself or to integrate it into your own application, you should be familiar with basic web development concepts as well as with general GIS concepts.
Introduction to GeoNode development¶
This module will introduce you to the components that GeoNode is built with, the standards that it supports, the services it provides based on those standards, and an overview of its architecture.
GeoNode is a web based GIS tool, and as such, in order to do development on GeoNode itself or to integrate it into your own application, you should be familiar with basic web development concepts as well as with general GIS concepts.
A set of reference links on these topics is included at the end of this module.
Standards¶
GeoNode is based on a set of Open Geospatial Consortium (OGC) standards. These standards enable GeoNode installations to be interoperable with a wide variety of tools that support these OGC standards and enable federation with other OGC compliant services and infrastructure. Reference links about these standards are also included at the end of this module.
GeoNode is also based on Web Standards …
Open Geospatial Consortium (OGC) Standards¶
Web Map Service (WMS)¶
The Web Map Service (WMS) specification defines an interface for requesting rendered map images across the web. It is used within GeoNode to display maps in the pages of the site and in the GeoExplorer application to display rendered layers based on default or custom styles.
Web Feature Service (WFS)¶
The Web Feature Service (WFS) specification defines an interface for reading and writing geographic features across the web. It is used within GeoNode to enable downloading of vector layers in various formats and within GeoExplorer to enable editing of Vector Layers that are stored in a GeoNode.
Web Coverage Service (WCS)¶
The Web Coverage Service (WCS) specification defines an interface for reading and writing geospatial raster data as “coverages” across the web. It is used within GeoNode to enable downloading of raster layers in various formats.
Catalogue Service for Web (CSW)¶
The Catalogue Service for Web (CSW) specification defines an interface for exposing a catalogue of geospatial metadata across the web. It is used within GeoNode to enable any application to search GeoNode’s catalogue or to provide federated search that includes a set of GeoNode layers within another application.
Tile Mapping Service (TMS/WMTS)¶
The Tile Mapping Service (TMS) specification defines an interface for retrieving rendered map tiles over the web. It is used within GeoNode to serve a cache of rendered layers to be included in GeoNode’s web pages or within the GeoExplorer mapping application. Its purpose is to improve performance on the client versus asking the WMS for rendered images directly.
Web Standards¶
HTML¶
HyperText Markup Language, commonly referred to as HTML, is the standard markup language used to create web pages. [1] Web browsers can read HTML files and render them into visible or audible web pages. HTML describes the structure of a website semantically along with cues for presentation, making it a markup language, rather than a programming language.
HTML elements form the building blocks of all websites. HTML allows images and objects to be embedded and can be used to create interactive forms. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes and other items.
The language is written in the form of HTML elements consisting of tags enclosed in angle brackets (like < >). Browsers do not display the HTML tags and scripts, but use them to interpret the content of the page.
HTML can embed scripts written in languages such as JavaScript which affect the behavior of HTML web pages. Web browsers can also refer to Cascading Style Sheets (CSS) to define the look and layout of text and other material. The World Wide Web Consortium (W3C), maintainer of both the HTML and the CSS standards, has encouraged the use of CSS over explicit presentational HTML since 1997.
CSS¶
Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation of a document written in a markup language. [2] Although most often used to set the visual style of web pages and user interfaces written in HTML and XHTML, the language can be applied to any XML document, including plain XML, SVG and XUL, and is applicable to rendering in speech, or on other media. Along with HTML and JavaScript, CSS is a cornerstone technology used by most websites to create visually engaging webpages, user interfaces for web applications, and user interfaces for many mobile applications. [3]
CSS is designed primarily to enable the separation of document content from document presentation, including aspects such as the layout, colors, and fonts. [4] This separation can improve content accessibility, provide more flexibility and control in the specification of presentation characteristics, enable multiple HTML pages to share formatting by specifying the relevant CSS in a separate .css file, and reduce complexity and repetition in the structural content, such as semantically insignificant tables that were widely used to format pages before consistent CSS rendering was available in all major browsers. CSS makes it possible to separate presentation instructions from the HTML content in a separate file or style section of the HTML file. For each matching HTML element, it provides a list of formatting instructions. For example, a CSS rule might specify that “all heading 1 elements should be bold”, leaving pure semantic HTML markup that asserts “this text is a level 1 heading” without formatting code such as a <bold> tag indicating how such text should be displayed.
This separation of formatting and content makes it possible to present the same markup page in different styles for different rendering methods, such as on-screen, in print, by voice (when read out by a speech-based browser or screen reader) and on Braille-based, tactile devices. It can also be used to display the web page differently depending on the screen size or device on which it is being viewed.
REST¶
In computing, Representational State Transfer (REST) is the software architectural style of the World Wide Web. [5] [6] [7] REST gives a coordinated set of constraints to the design of components in a distributed hypermedia system that can lead to a higher performing and more maintainable architecture.
To the extent that systems conform to the constraints of REST they can be called RESTful. RESTful systems typically, but not always, communicate over the Hypertext Transfer Protocol with the same HTTP verbs (GET, POST, PUT, DELETE, etc.) which web browsers use to retrieve web pages and to send data to remote servers. [8] REST interfaces usually involve collections of resources with identifiers, for example /people/tom, which can be operated upon using standard verbs, such as DELETE /people/tom.
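The resource-plus-verb idea can be illustrated with a toy in-memory store. This is only a sketch of the verb semantics (and the status codes a RESTful service would typically return), not a real HTTP server:

```python
class PeopleResource:
    """Toy in-memory stand-in for a RESTful /people collection."""

    def __init__(self):
        self._people = {}

    def put(self, name, data):
        # PUT /people/<name>: create (201) or replace (200) the resource.
        created = name not in self._people
        self._people[name] = data
        return 201 if created else 200

    def get(self, name):
        # GET /people/<name>: retrieve the representation, 404 if absent.
        return (200, self._people[name]) if name in self._people else (404, None)

    def delete(self, name):
        # DELETE /people/<name>: remove the resource, 404 if absent.
        return 204 if self._people.pop(name, None) is not None else 404

people = PeopleResource()
people.put("tom", {"role": "editor"})   # -> 201
people.get("tom")                        # -> (200, {'role': 'editor'})
people.delete("tom")                     # -> 204
people.get("tom")                        # -> (404, None)
```

The point is that the interface is uniform: every resource is addressed by an identifier and manipulated only through the same small set of verbs.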
[1] “Hypertext Markup Language”. Definition of hypertext markup language by Merriam-Webster.
[2] “CSS developer guide”. Mozilla Developer Network. Retrieved 2015-09-24.
[3] “Web-based Mobile Apps of the Future Using HTML 5, CSS and JavaScript”. HTMLGoodies. Retrieved October 2014.
[4] “What is CSS?”. World Wide Web Consortium. Retrieved December 2010.
[5] Fielding, R. T.; Taylor, R. N. (2000). “Principled design of the modern Web architecture”. pp. 407–416. doi:10.1145/337180.337228.
[6] Richardson, Leonard; Sam Ruby (2007), RESTful Web Services, O’Reilly Media, ISBN 978-0-596-52926-0, retrieved 18 January 2011. “The main topic of this book is the web service architectures which can be considered RESTful: those which get a good score when judged on the criteria set forth in Roy Fielding’s dissertation.”
[7] Richardson, Leonard; Mike Amundsen (2013), RESTful Web APIs, O’Reilly Media, ISBN 978-1-449-35806-8, retrieved 15 September 2015. “The Fielding dissertation explains the decisions behind the design of the Web.”
[8] Fielding, Roy Thomas (2000). “Chapter 5: Representational State Transfer (REST)”. Architectural Styles and the Design of Network-based Software Architectures (Ph.D.). University of California, Irvine. “This chapter introduced the Representational State Transfer (REST) architectural style for distributed hypermedia systems. REST provides a set of architectural constraints that, when applied as a whole, emphasizes scalability of component interactions, generality of interfaces, independent deployment of components, and intermediary components to reduce interaction latency, enforce security, and encapsulate legacy systems.”
Exercises¶
Components and Services¶
Note
Hint, if bash-completion
is installed, try <TAB><TAB> to get completions.
- start/stop services
$ sudo service apache2
$ sudo service apache2 reload
$ sudo service tomcat7
$ sudo service postgresql
- basic psql interactions
$ sudo su - postgres
$ psql
=> help             # get help
=> \?               # psql specific commands
=> \l               # list databases
=> \c geonode       # switch database
=> \ds              # list tables
=> \dS layers_layer # describe table
OGC Standards¶
WMS¶
- Use the layer preview functionality in GeoServer to bring up a web map.
- Copy the URL for the image in the map.
- Alter URL parameters for the request.
- Use curl to get the capabilities document
$ curl 'http://localhost/geoserver/wms?request=getcapabilities'
More: http://docs.geoserver.org/stable/en/user/services/wms/index.html
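The parameters you alter by hand in the URL can also be assembled programmatically. A sketch with the standard library follows; the layer name and the local GeoServer endpoint are placeholders for your own values:

```python
from urllib.parse import urlencode

def wms_getmap_url(base, layer, bbox, width=512, height=512):
    """Build a WMS 1.1.1 GetMap request URL."""
    params = {
        "service": "WMS",
        "version": "1.1.1",
        "request": "GetMap",
        "layers": layer,
        # bbox is minx,miny,maxx,maxy in the order WMS 1.1.1 expects.
        "bbox": ",".join(str(c) for c in bbox),
        "width": width,
        "height": height,
        "srs": "EPSG:4326",
        "format": "image/png",
    }
    return base + "?" + urlencode(params)

url = wms_getmap_url("http://localhost/geoserver/wms",
                     "geonode:example_layer", (-180, -90, 180, 90))
print(url)
```

Changing width, height or bbox here is exactly the same experiment as editing the URL parameters in the browser.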
WFS¶
- Describe a feature type using curl (replace ws:name with your layer)
$ curl 'http://localhost/geoserver/wfs?request=describefeaturetype&typeName=ws:name'
More: http://docs.geoserver.org/stable/en/user/services/wfs/reference.html
Development References¶
Basic Web based GIS Concepts and Background¶
- OGC Services
- Web Application Architecture
- AJAX and REST
- OpenGeo Suite
- GeoServer Administration
- PostgreSQL and PostGIS Administration
  - http://workshops.opengeo.org/postgis-intro/
  - http://workshops.opengeo.org/postgis-spatialdbtips/
Core development tools and libraries¶
- python
- django
- javascript
- jquery
- bootstrap
- geotools/geoscript/geoserver
- geopython
- gdal/ogr
Django Overview¶
This section introduces some basic concepts of Django, the Python based web framework on top of which GeoNode has been developed.
The main objective of Django is to facilitate the creation of complex, database-driven websites. Django emphasizes reusability and “pluggability” of components, rapid development, and the “don’t repeat yourself” principle. Python is used throughout, even for settings files and data models.
Django also provides an administrative interface to create, read, update and delete models that is dynamically generated by introspection and configured through the Administrative Templates.
Getting Started With Django¶
Object-relational mapper¶
Data models can be defined entirely in Python. Django provides a rich, dynamic database-access API for free, but it is still possible to write SQL if needed.
Hint
The following documentation is based on official documentation of the project Django.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
class Band(models.Model):
    """A model of a rock band."""
    name = models.CharField(max_length=200)
    can_rock = models.BooleanField(default=True)


class Member(models.Model):
    """A model of a rock band member."""
    name = models.CharField("Member's name", max_length=200)
    instrument = models.CharField(choices=(
            ('g', "Guitar"),
            ('b', "Bass"),
            ('d', "Drums"),
        ),
        max_length=1
    )
    band = models.ForeignKey("Band")
Models¶
A model is a Python class containing the essential fields and behaviors of the data stored on the DB. Generally, each model maps to a single database table.
- Each model is a Python class that subclasses django.db.models.Model.
- Each attribute of the model represents a database field.
- With all of this, Django gives you an automatically-generated database-access API; see Making queries.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
This example model defines a Person, which has a first_name and last_name:
from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
first_name and last_name are fields of the model. Each field is specified as a class attribute, and each attribute maps to a database column.
The above Person model would create a database table like this:
CREATE TABLE myapp_person (
    "id" serial NOT NULL PRIMARY KEY,
    "first_name" varchar(30) NOT NULL,
    "last_name" varchar(30) NOT NULL
);
Some technical notes:
- The name of the table, myapp_person, is automatically derived from some model metadata but can be overridden.
- An id field is added automatically, but this behavior can be overridden.
- The CREATE TABLE SQL in this example is formatted using PostgreSQL syntax, but it’s worth noting Django uses SQL tailored to the database backend specified in the settings file.
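To see the shape of the generated table without a Django project at hand, the equivalent DDL can be exercised directly in SQLite; the syntax below is adapted from the PostgreSQL example above (serial becomes an autoincrementing integer key):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite rendition of the DDL Django generates for the Person model.
conn.execute("""
    CREATE TABLE myapp_person (
        "id" integer NOT NULL PRIMARY KEY AUTOINCREMENT,
        "first_name" varchar(30) NOT NULL,
        "last_name" varchar(30) NOT NULL
    )
""")
# The id column is filled automatically, just as Django's implicit pk is.
conn.execute("INSERT INTO myapp_person (first_name, last_name) VALUES (?, ?)",
             ("Ada", "Lovelace"))
row = conn.execute("SELECT id, first_name, last_name FROM myapp_person").fetchone()
print(row)  # (1, 'Ada', 'Lovelace')
```

This mirrors the note above: the table name and the auto-added id behave the same way regardless of the backend, only the dialect of the emitted SQL changes.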
Once models have been defined, Django must be instructed on how to use those models. This is done by editing the Django settings file and changing the INSTALLED_APPS setting to add the name of the module that contains the model class.
For example, if the models for the application are defined in the module myapp.models, INSTALLED_APPS should read, in part:
INSTALLED_APPS = (
    #...
    'myapp',
    #...
)
Warning
When you add new apps to INSTALLED_APPS, be sure to run manage.py migrate, optionally making migrations for them first with manage.py makemigrations.
Note
GeoNode uses the specific command manage.py syncdb to perform the models update and migration.
The list of DB fields is reflected (and specified) by the model class attributes.
Warning
Be careful not to choose field names that conflict with the models API like clean, save, or delete.
Example:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.db import models

class Musician(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    instrument = models.CharField(max_length=100)

class Album(models.Model):
    artist = models.ForeignKey(Musician)
    name = models.CharField(max_length=100)
    release_date = models.DateField()
    num_stars = models.IntegerField()
More: Field Types
Custom methods on a model can be used to add custom “row-level” functionality to an object. This is a valuable technique for keeping business logic in one place.
For example, the following model has a few custom methods:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.db import models

class Person(models.Model):
    first_name = models.CharField(max_length=50)
    last_name = models.CharField(max_length=50)
    birth_date = models.DateField()

    def baby_boomer_status(self):
        "Returns the person's baby-boomer status."
        import datetime
        if self.birth_date < datetime.date(1945, 8, 1):
            return "Pre-boomer"
        elif self.birth_date < datetime.date(1965, 1, 1):
            return "Baby boomer"
        else:
            return "Post-boomer"

    def _get_full_name(self):
        "Returns the person's full name."
        return '%s %s' % (self.first_name, self.last_name)
    full_name = property(_get_full_name)
The last method in this example is a property.
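The property mechanism itself is plain Python, not Django; stripped of the model machinery, a minimal runnable sketch of the same idiom is:

```python
class Person(object):
    """Plain-Python sketch of the property idiom used in the model above."""

    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

    def _get_full_name(self):
        "Returns the person's full name."
        return '%s %s' % (self.first_name, self.last_name)

    # Expose the method as a read-only attribute:
    full_name = property(_get_full_name)

p = Person('John', 'Lennon')
print(p.full_name)  # accessed like a field, without parentheses
```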
The model instance reference has a complete list of methods automatically given to each model. It is possible to override most of these; see overriding predefined model methods
More: Models Methods
Making queries¶
Django automatically provides a database-abstraction API that allows you to create, retrieve, update and delete objects.
As an example:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.db import models

class Blog(models.Model):
    name = models.CharField(max_length=100)
    tagline = models.TextField()

    def __str__(self):              # __unicode__ on Python 2
        return self.name

class Author(models.Model):
    name = models.CharField(max_length=50)
    email = models.EmailField()

    def __str__(self):              # __unicode__ on Python 2
        return self.name

class Entry(models.Model):
    blog = models.ForeignKey(Blog)
    headline = models.CharField(max_length=255)
    body_text = models.TextField()
    pub_date = models.DateField()
    mod_date = models.DateField()
    authors = models.ManyToManyField(Author)
    n_comments = models.IntegerField()
    n_pingbacks = models.IntegerField()
    rating = models.IntegerField()

    def __str__(self):              # __unicode__ on Python 2
        return self.headline
As noted above, a model class represents a database table, and an instance of that class represents a particular record in that table.
To create an object, instantiate it using keyword arguments to the model class, then call save() to save it to the database.
Assuming models live in a file mysite/blog/models.py, here’s an example:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
>>> from blog.models import Blog
>>> b = Blog(name='Beatles Blog', tagline='All the latest Beatles news.')
>>> b.save()
This performs an INSERT SQL statement behind the scenes. Django doesn’t hit the database until you explicitly call save(). The save() method has no return value.
>>> b5.name = 'New name'
>>> b5.save()
This performs an UPDATE SQL statement behind the scenes. Django doesn’t hit the database until you explicitly call save().
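The instantiate-then-save sequence can also be collapsed into one step with the manager's create() method, which performs the INSERT immediately; for example, with the Blog model above:

```python
>>> b = Blog.objects.create(name='Cheddar Talk', tagline='Thoughts on cheese.')
```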
Retrieving objects from the database can be done by constructing a QuerySet via a Manager on the model class.
A QuerySet represents a collection of objects from the database. It can have zero, one or many filters. Filters narrow down the query results based on the given parameters. In SQL terms, a QuerySet equates to a SELECT statement, and a filter is a limiting clause such as WHERE or LIMIT.
Each model has at least one Manager, and it's called objects by default. It can be accessed directly via the model class, like so:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
>>> Blog.objects
<django.db.models.manager.Manager object at ...>
>>> b = Blog(name='Foo', tagline='Bar')
>>> b.objects
Traceback:
    ...
AttributeError: "Manager isn't accessible via Blog instances."
Note
Managers are accessible only via model classes, rather than from model instances, to enforce a separation between “table-level” operations and “record-level” operations.
The Manager is the main source of QuerySets for a model. For example, Blog.objects.all() returns a QuerySet that contains all Blog objects in the database.
The simplest way to retrieve objects from a table is to get all of them. To do this, use the all() method on a Manager:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
>>> all_entries = Entry.objects.all()
The all() method returns a QuerySet describing all the objects in the database table. To select only a subset of the complete set of objects, the QuerySet must be refined by adding filter conditions.
The two most common ways to refine a QuerySet are:
filter(**kwargs)
Returns a new QuerySet containing objects that match the given lookup parameters.
exclude(**kwargs)
Returns a new QuerySet containing objects that do not match the given lookup parameters. The lookup parameters (**kwargs in the above function definitions) should be in the format described in Field lookups below.
For example, to get a QuerySet of blog entries from the year 2006, use filter() like so:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
Entry.objects.filter(pub_date__year=2006)
With the default manager class, it is the same as:
Entry.objects.all().filter(pub_date__year=2006)
The result of refining a QuerySet is itself a QuerySet, so it’s possible to chain refinements together.
For example:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
>>> Entry.objects.filter(
...     headline__startswith='What'
... ).exclude(
...     pub_date__gte=datetime.date.today()
... ).filter(
...     pub_date__gte=datetime.date(2005, 1, 30)
... )
This takes the initial QuerySet of all entries in the database, adds a filter, then an exclusion, then another filter. The final result is a QuerySet containing all entries with a headline that starts with “What”, that were published between January 30, 2005, and the current day.
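Chaining like this is cheap because QuerySets are lazy: building or refining one does not touch the database, and the single SELECT only runs when the results are actually needed, for example on iteration. A transcript-style sketch:

```python
>>> q = Entry.objects.filter(headline__startswith='What')
>>> q = q.exclude(pub_date__gte=datetime.date.today())
>>> entries = list(q)  # the database is hit only here
```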
More: Making queries
URLs and views¶
A clean, elegant URL scheme is an important detail in a high-quality Web application. Django encourages beautiful URL design and does not put junk in URLs, like .php or .asp.
In Django, a Python module called the URLconf acts as a table of contents for the application. It contains a simple mapping between URL patterns and views.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^bands/$', views.band_listing, name='band-list'),
    url(r'^bands/(\d+)/$', views.band_detail, name='band-detail'),
    url(r'^bands/search/$', views.band_search, name='band-search'),
]
from django.shortcuts import render

from bands import models

def band_listing(request):
    """A view of all bands."""
    bands = models.Band.objects.all()
    return render(request, 'bands/band_listing.html', {'bands': bands})
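The name given to each URL pattern allows views and templates to compute URLs instead of hard-coding them. Assuming the band-detail pattern above, reverse() resolves a name and its arguments back to a URL:

```python
>>> from django.core.urlresolvers import reverse
>>> reverse('band-detail', args=[5])
'/bands/5/'
```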
More: URL dispatcher
Templates¶
Django’s template language allows developers to put logic into the HTML:
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
<html>
  <head>
    <title>Band Listing</title>
  </head>
  <body>
    <h1>All Bands</h1>
    <ul>
    {% for band in bands %}
      <li>
        <h2><a href="{{ band.get_absolute_url }}">{{ band.name }}</a></h2>
        {% if band.can_rock %}<p>This band can rock!</p>{% endif %}
      </li>
    {% endfor %}
    </ul>
  </body>
</html>
More: Templates
Forms¶
Django provides a library that handles rendering HTML forms, validation of data submitted by users, and converting the data to native Python types. Django also provides a way to generate forms from your existing models and to use these forms to create and update data.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django import forms

class BandContactForm(forms.Form):
    subject = forms.CharField(max_length=100)
    message = forms.CharField()
    sender = forms.EmailField()
    cc_myself = forms.BooleanField(required=False)
GET and POST¶
GET and POST are the only HTTP methods to use when dealing with forms.
Django’s login form is returned using the POST method, in which the browser bundles up the form data, encodes it for transmission, sends it to the server, and then receives back its response.
GET, by contrast, bundles the submitted data into a string, and uses this to compose a URL. The URL contains the address where the data must be sent, as well as the data keys and values.
You can see this in action if you do a search in the Django documentation, which will produce a URL of the form https://docs.djangoproject.com/search/?q=forms&release=1.
GET and POST are typically used for different purposes.
Any request that could be used to change the state of the system - for example, a request that makes changes in the database - should use POST. GET should be used only for requests that do not affect the state of the system.
GET would also be unsuitable for a password form, because the password would appear in the URL, and thus also in browser history and server logs, all in plain text. Neither would it be suitable for large quantities of data, or for binary data such as an image. A Web application that uses GET requests for admin forms is a security risk: it can be easy for an attacker to mimic a form's request to gain access to sensitive parts of the system. POST, coupled with other protections like Django's CSRF protection, offers more control over access.
On the other hand, GET is suitable for things like a web search form, because the URLs that represent a GET request can easily be bookmarked, shared, or resubmitted.
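A typical view ties the two methods together: it returns an empty, unbound BandContactForm on GET and validates the submitted data on POST. A sketch (the template name and the comment placeholders are illustrative):

```python
from django.shortcuts import render

from .forms import BandContactForm

def band_contact(request):
    if request.method == 'POST':
        # Bind the submitted data to the form and validate it
        form = BandContactForm(request.POST)
        if form.is_valid():
            # Validated, type-converted values live in form.cleaned_data
            subject = form.cleaned_data['subject']
            # ... process the message here ...
    else:
        # GET: show an empty, unbound form
        form = BandContactForm()
    return render(request, 'bands/contact.html', {'form': form})
```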
More: Working With Forms
Authentication¶
Django supports a full-featured and secure authentication system. It handles user accounts, groups, permissions and cookie-based user sessions.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

@login_required
def my_protected_view(request):
    """A view that can only be accessed by logged-in users"""
    return render(request, 'protected.html', {'current_user': request.user})
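User accounts themselves are ordinary model objects managed through the ORM; for example, from the Django shell (transcript-style sketch, with an illustrative username and password):

```python
>>> from django.contrib.auth.models import User
>>> user = User.objects.create_user('alice', 'alice@example.com', 's3cret')
>>> user.check_password('s3cret')
True
```

The password is stored hashed, which is why check_password() is used rather than a direct string comparison.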
Admin¶
One of the most powerful parts of Django is its automatic admin interface. It reads metadata from models in order to provide a powerful and ready-to-use GUI for CRUD operations against the model.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.contrib import admin
from bands.models import Band, Member

class MemberAdmin(admin.ModelAdmin):
    """Customize the look of the auto-generated admin for the Member model"""
    list_display = ('name', 'instrument')
    list_filter = ('band',)

admin.site.register(Band)                 # Use the default options
admin.site.register(Member, MemberAdmin)  # Use the customized options
Note
The advanced workshop for Developers will provide more details on GeoNode specific models and admin interface
More: The Django admin site
Internationalization¶
Django offers full support for translating text into different languages, plus locale-specific formatting of dates, times, numbers and time zones. It lets developers and template authors specify which parts of their apps should be translated or formatted for local languages and cultures, and it uses these hooks to localize Web applications for particular users according to their preferences.
Note
The following examples are taken from the official Django documentation for the sole purpose of introducing the general concepts.
from django.shortcuts import render
from django.utils.translation import ugettext

def homepage(request):
    """
    Shows the homepage with a welcome message that is translated in the
    user's language.
    """
    message = ugettext('Welcome to our site!')
    return render(request, 'homepage.html', {'message': message})
{% load i18n %}
<html>
  <head>
    <title>{% trans 'Homepage - Hall of Fame' %}</title>
  </head>
  <body>
    {# Translated in the view: #}
    <h1>{{ message }}</h1>
    <p>
      {% blocktrans count member_count=bands.count %}
      Here is the only band in the hall of fame:
      {% plural %}
      Here are all the {{ member_count }} bands in the hall of fame:
      {% endblocktrans %}
    </p>
    <ul>
    {% for band in bands %}
      <li>
        <h2><a href="{{ band.get_absolute_url }}">{{ band.name }}</a></h2>
        {% if band.can_rock %}<p>{% trans 'This band can rock!' %}</p>{% endif %}
      </li>
    {% endfor %}
    </ul>
  </body>
</html>
Note
The advanced workshop for Developers will provide more details on how to create languages and translations on GeoNode using Transifex
Security¶
Django provides multiple protections against:
- Clickjacking
Clickjacking is a type of attack where a malicious site wraps another site in a frame. This attack can result in an unsuspecting user being tricked into performing unintended actions on the target site.
Django provides clickjacking protection in the form of the X-Frame-Options middleware, which in a supporting browser can prevent the site from being rendered inside a frame.
- Cross site scripting (XSS)
XSS attacks allow a user to inject client-side scripts into the browsers of other users. This is usually achieved by storing the malicious scripts in the database, where they will be retrieved and displayed to other users, or by getting users to click a link which will cause the attacker's JavaScript to be executed by the user's browser. However, XSS attacks can originate from any untrusted source of data, such as cookies or Web services, whenever the data is not sufficiently sanitized before being included in a page.
- Cross site request forgery (CSRF)
CSRF attacks allow a malicious user to execute actions using the credentials of another user without that user’s knowledge or consent.
CSRF protection works by checking for a nonce in each POST request. This ensures that a malicious user cannot simply “replay” a form POST to your Web site and have another logged in user unwittingly submit that form. The malicious user would have to know the nonce, which is user specific (using a cookie).
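In template terms this means every POST form must include the {% csrf_token %} tag, which renders the hidden nonce field that the middleware checks (the action URL is illustrative):

```html
<form action="/bands/search/" method="post">
  {% csrf_token %}
  {{ form }}
  <input type="submit" value="Search" />
</form>
```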
- SQL injection
SQL injection is a type of attack where a malicious user is able to execute arbitrary SQL code on a database. This can result in records being deleted or data leakage.
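Django's ORM is protected because query parameters are escaped by the database driver; the same holds for raw SQL as long as parameters are passed separately instead of being interpolated into the query string. A sketch using the Person model above (the table name myapp_person is illustrative):

```python
>>> name = "Ruth'; DROP TABLE myapp_person; --"
>>> # Safe: the driver escapes the parameter
>>> Person.objects.raw('SELECT * FROM myapp_person WHERE last_name = %s', [name])
>>> # Unsafe: never build SQL with string formatting
>>> # Person.objects.raw("SELECT * FROM myapp_person WHERE last_name = '%s'" % name)
```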
- Host header validation
Django uses the Host header provided by the client to construct URLs in certain cases. While these values are sanitized to prevent Cross Site Scripting attacks, a fake Host value can be used for Cross-Site Request Forgery, cache poisoning attacks, and poisoning links in emails.
Because even seemingly-secure web server configurations are susceptible to fake Host headers, Django validates Host headers against the ALLOWED_HOSTS setting in the django.http.HttpRequest.get_host() method. This validation only applies via get_host(); if your code accesses the Host header directly from request.META you are bypassing this security protection.
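A minimal settings fragment (the domains are illustrative):

```python
# settings.py -- requests whose Host header does not match are rejected
ALLOWED_HOSTS = ['www.example.com', '.example.org']  # a leading dot matches subdomains
```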
- SSL/HTTPS
It is always better for security, though not always practical in all cases, to deploy your site behind HTTPS. Without this, it is possible for malicious network users to sniff authentication credentials or any other information transferred between client and server, and in some cases – active network attackers – to alter data that is sent in either direction.
Django provides some settings to secure your site under SSL/HTTPS.
Warning
While Django provides good security protection out of the box, it is still important to properly deploy your application and take advantage of the security protection of the Web server, operating system and other components.
- Make sure that your Python code is outside of the Web server’s root. This will ensure that your Python code is not accidentally served as plain text (or accidentally executed).
- Take care with any user uploaded files.
- Django does not throttle requests to authenticate users. To protect against brute-force attacks against the authentication system, you may consider deploying a Django plugin or Web server module to throttle these requests.
- Keep your SECRET_KEY a secret.
- It is a good idea to limit the accessibility of your caching system and database using a firewall.
More: Security in Django
Development Prerequisites and Core Modules¶
This module will introduce you to the basic tools and skills required to start actively developing GeoNode.
GeoNode’s Development Prerequisites¶
Basic Shell Tools¶
ssh and sudo¶
ssh and sudo are very basic terminal skills which you will need to deploy, maintain and develop with GeoNode. If you are not already familiar with their usage, you should review the basic descriptions below and follow the external links to learn more about how to use them effectively as part of your development workflow.
ssh is the network protocol used to connect to the remote server where you run your GeoNode instance, whether on your own network or in the cloud. You will need to know how to use the ssh command from the terminal on your unix machine, or how to use an ssh client like PuTTY or WinSCP on Windows. You may need to use PKI certificates to connect to your remote server, and should be familiar with the steps and options necessary to connect this way. More information about ssh can be found in the links below.
sudo is the command used to execute a terminal command as the superuser when you are logged in as a normal user. You will need to use sudo in order to start, stop and restart key services on your GeoNode instance. If you are not able to grant yourself these privileges on the machine you are using for your GeoNode instance, you may need to consult with your network administrator to arrange for your user to be granted sudo permissions. More information about sudo can be found in the links below.
bash¶
Bash is the most common unix shell and will usually be the default on servers where you will be deploying your GeoNode instance. You should be familiar with the most common bash commands in order to be able to deploy, maintain and modify a GeoNode instance. More information about Bash and common bash commands can be found in the links below.
apt¶
apt is the packaging tool that is used to install GeoNode on Ubuntu and other Debian-based systems. You will need to be familiar with adding Personal Package Archives to your list of install sources, and with basic apt commands. More information about apt can be found in the links below.
Python Development Tools¶
The GeoNode development process relies on several widely used python development tools in order to make things easier for developers and other users of the systems that GeoNode developers work on or where GeoNodes are deployed. They are considered best practices for modern python development, and you should become familiar with these basic tools and be comfortable using them on your own projects and systems.
virtualenv¶
virtualenv is a tool used to create isolated python development environments such that the versions of project dependencies are sandboxed from the system-wide python packages. This eliminates the commonly encountered problem of different projects on the same system using different versions of the same library. You should be familiar with how to create and activate virtual environments for the projects you work on. More information about virtualenv can be found in the links below.
virtualenvwrapper is a wrapper around the virtualenv package that makes it easier to create and switch between virtual environments as you do development. Using it will make your life much easier, so it's recommended that you install and configure it and use its commands as part of your virtualenv workflow. More info about virtualenvwrapper can be found in the links below.
pip¶
pip is a tool for installing and managing python packages. Specifically, it is used to install and upgrade packages found in the Python Package Index. GeoNode uses pip to install itself, and to manage all of the python dependencies that are needed as part of a GeoNode instance. As you learn to add new modules to your GeoNode, you will need to become familiar with the use of pip and with basic python packaging usage. More information about pip can be found in the links below.
miscellaneous¶
ipython is a set of tools to make your python development and debugging experience easier. The primary tool you want to use is an interactive shell that adds introspection, integrated help and command completion and more. While not strictly required to do GeoNode development, learning how to use ipython will make your development more productive and pleasant. More information about ipython can be found in the links below.
- http://ipython.org/
- http://pypi.python.org/pypi/ipython
- https://github.com/ipython/ipython
- http://en.wikipedia.org/wiki/IPython
pdb is a standard python module that is used to interactively debug your python code. It supports setting conditional breakpoints so you can step through the code line by line and inspect your variables and perform arbitrary execution of statements. Learning how to effectively use pdb will make the process of debugging your application code significantly easier. More information about pdb can be found in the links below.
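As a sketch (the average function is just an illustration), dropping into the debugger looks like this; uncommenting the set_trace() call stops execution at that line and opens an interactive prompt:

```python
import pdb

def average(values):
    """Return the arithmetic mean of a sequence of numbers."""
    total = 0
    for v in values:
        # Uncomment to stop here on every iteration and inspect
        # `total` and `v` interactively (n = next, c = continue, p = print):
        # pdb.set_trace()
        total += v
    return float(total) / len(values)

print(average([2, 4, 6]))
```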
Django¶
GeoNode is built on top of the Django web framework, and as such, you will need to become generally familiar with Django itself in order to become a productive GeoNode developer. Django has excellent documentation, and you should familiarize yourself with Django by following the Django workshop and reading through its documentation as required.
Model Template View¶
Django is based on the Model Template View paradigm (more commonly called Model View Controller). Models are used to define objects that you use in your application and Django’s ORM is used to map these models to a database. Views are used to implement the business logic of your application and provide objects and other context for the templates. Templates are used to render the context from views into a page for display to the user. You should become familiar with this common paradigm used in most modern web frameworks, and how it is specifically implemented and used in Django. The Django tutorial itself is a great place to start. More information about MTV in Django can be found in the links below.
HTTP Request Response¶
Django and all other web frameworks are based on the HTTP Request Response cycle. Requests come in to the server from remote clients which are primarily web browsers, but also can be api clients, and the server returns with a Response. You should be familiar with these very basic HTTP principles and become familiar with the way that Django implements them. More information about HTTP, Requests and Responses and Djangos implementation in the links below.
Management Commands¶
Django projects have access to a set of management commands that are used to manage your project. Django itself provides a set of these commands, and django apps (including GeoNode) can provide their own. Management commands are used to do things like synchronize your models with your database, load data from fixtures or back up your database with fixtures, start the development server, initiate the debugger and many other things. GeoNode provides management commands for synchronizing with GeoServer or updating the layers already in your GeoNode. You should become familiar with the basic management commands that come with Django, and specifically with the commands that are part of GeoNode. The GeoNode-specific commands are covered in a later section. More information about management commands can be found in the links below.
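For example (transcript-style; updatelayers is one of the GeoNode-specific commands):

```shell
$ python manage.py syncdb          # synchronize models with the database
$ python manage.py runserver       # start the development server
$ python manage.py updatelayers    # GeoNode: synchronize layers with GeoServer
```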
Django Admin Interface¶
Django provides a built-in management console that administrators and developers can use to look at the data in the database that is part of the installed applications. Administrators can use this console to perform many common administration tasks that are a necessary part of running a GeoNode instance, and as a developer, you will use this interface during your development process to inspect the database and the data stored in your models. More information about the Django admin interface can be found in the links below.
Template Tags¶
Django templates make use of a set of tags to inject, filter and format content in a rendered HTML page. Django itself includes a set of built-in template tags and filters that you will use in your own templates, and GeoNode provides a GeoNode-specific set of tags that are used in the GeoNode templates. You should become familiar with the built-in tag set and with GeoNode's specific tags as you work on developing your own templates or extending from GeoNode's. More information about Django template tags can be found in the links below.
GeoNode’s Core Modules¶
GeoNode is made up of a set of core Django pluggable modules (known as apps in Django) that provide the functionality of the application. Together they make up the key components of a GeoNode site. While your own use case and implementation may not require that you work directly on these modules, it is important that you become familiar with their layout, structure and the functionality that they provide. You may need to import these apps into your own apps, and as such, becoming familiar with them is an important step in becoming a proficient GeoNode developer.
geonode.layers¶
geonode.layers is the most fundamental GeoNode module. It is used to represent layers of data stored in a GeoNode's paired GeoServer. The Layer model class inherits fields from the ResourceBase class, which provides all of the fields necessary for the metadata catalogue, and adds fields that map the object to its corresponding layer in GeoServer. When your users upload a layer via the user interface, the layer is imported to GeoServer and a record is added to GeoNode's database to represent that GeoServer layer within GeoNode itself.
The Layer model class provides a set of helper methods that are used to perform operations on a Layer object, and also to return things such as the list of Download or Metadata links for that layer. Additional classes are used to model the layer's Attributes, Styles, Contacts and Links. The Django signals framework is used to invoke specific functions to synchronize with GeoServer before and after the layer is saved.
The views in the layers app are used to perform functions such as uploading, replacing, removing or changing the points of contact for a layer, and views are also used to update layer styles, download layers in bulk or change a layer's permissions.
The forms module in the layer app is used to drive the user interface forms necessary for performing the business logic that the views provide.
The Layers app also includes a set of templates that are paired with views and used to drive the user interface. A small set of layer template tags is also used to help drive the layer explore and search pages.
Some helper modules such as geonode.layers.metadata and geonode.layers.ows are used by the layer views to perform specific functions and help keep the main views module more concise and legible.
Additionally, the GeoNode specific management commands are a part of the geonode.layers app.
You should spend some time reviewing the layers app through GitHub's code browsing interface.
https://github.com/GeoNode/geonode/tree/master/geonode/layers
geonode.maps¶
The geonode.maps app is used to group together GeoNode's multi-layer map functionality. The Map and MapLayer objects are used to model and implement maps created with the GeoExplorer application. The Map object also extends from the ResourceBase class, which provides the ability to manage a full set of metadata fields for a Map.
The views in the maps app perform many of the same functions as the views in the layers app such as adding, changing, replacing or removing a map and also provide the endpoints for returning the map configuration from the database that is used to initialize the GeoExplorer app.
The maps app also includes a set of forms, customization of the Django admin, some utility functions and a set of templates and template tags.
You can familiarize yourself with the maps app on GitHub.
https://github.com/GeoNode/geonode/tree/master/geonode/maps
geonode.security¶
The geonode.security app is used to provide object level permissions within the GeoNode Django application. It is a custom Django authentication backend and is used to assign Generic, User and Group Permissions to Layers, Maps and other objects in the GeoNode system. Generic permissions are used to enable public anonymous or authenticated viewing and/or editing of your data layers and maps, and User and Group specific permissions are used to allow specific users or groups to access and edit your layers.
geonode.search¶
The geonode.search module provides the search API that is used to drive the GeoNode search pages. It is configured to index layers, maps, documents and profiles, but is extensible to allow you to use it to index your own model classes. This module is currently based on the Django ORM and as such has a limited set of search features, but the GeoNode development team is actively working on making it possible to use this module with more feature-rich search engines.
geonode.catalogue¶
The geonode.catalogue app provides a key set of metadata catalogue functions within GeoNode itself. GeoNode is configured to use an integrated version of the pycsw library to perform these functions, but can also be configured to use any OGC compliant CS-W implementation such as GeoNetwork or Deegree. The metadata app allows users to import and/or edit metadata for their layers, maps and documents, and it provides an OGC compliant search interface for use in federating with other systems.
geonode.geoserver¶
The geonode.geoserver module is used to interact with GeoServer from within GeoNode's python code. It relies heavily on the gsconfig library, which addresses GeoServer's REST configuration API. Additionally, the geonode.geoserver.uploader module is used to interact with GeoServer's Importer API for uploading and configuring layers.
geonode.people¶
The geonode.people module is used to model and store information about both GeoNode users and people outside of the system who are listed as Points of Contact for particular layers. It is the foundational module for GeoNode’s social features. It provides a set of forms for users to edit and manage their own profiles as well as to view and interact with the profiles of other users.
geoexplorer¶
GeoNode’s core GIS client functions are performed by GeoExplorer. The GeoExplorer app is in turn based on GeoExt, OpenLayers and ExtJS. It provides functionality for constructing maps, styling layers and connecting to remote services. GeoExplorer is the reference implementation of the OpenGeo Suite SDK which is based on GXP. GeoNode treats GeoExplorer as an external module that is used out of the box in GeoNode, but it is possible for you to create your own Suite SDK app and integrate it with GeoNode.
Static Site¶
The front end of GeoNode is composed of a set of core templates, specific templates for each module, cascading style sheets to style those pages and a set of javascript modules that provide the interactive functionality in the site.
Templates¶
GeoNode includes a basic set of core templates that use Django's template inheritance system to provide a modular system for constructing the web pages in GeoNode's interface. These core templates drive the overall page layout and things like the home page. You will start the process of customizing your GeoNode instance by overriding these templates, so you should familiarize yourself with their structure and how they inherit from each other to drive the pages.
Additionally, most of the apps described above have their own set of templates that are used to drive the pages for each module. You may also want to override these templates for your own purposes and as such should familiarize yourself with a few of the key ones.
CSS¶
GeoNode’s CSS is based on Twitter’s Bootstrap library, which uses the lessc dynamic stylesheet language. GeoNode extends the basic Bootstrap style, and you can create your own Bootstrap-based style to customize the look and feel of your own GeoNode instance. Sites like bootswatch.com also provide ready-made styles that you can simply drop into your project to change the style.
Javascript¶
The interactive functionality in GeoNode pages is provided by the jQuery javascript framework and a set of jQuery plugins. The core set of GeoNode javascript modules closely aligns with the apps described above, and there are also a few pieces of functionality provided as javascript modules that are used throughout all of the apps. You can add your own jQuery code and/or plugins to perform interactive functionality in your own application.
Exercises¶
Shell and Utilities¶
ssh into your virtual machine or other instance
sudo to modify the sshd_config settings to verify disabling of dns resolution (UseDNS=no)
install a command line helper
$ sudo apt-get install bash-completion
exercise command completion
$ apt-get install <TAB><TAB>
activate/deactivate the virtualenv on your instance
$ source /var/lib/geonode/bin/activate
$ deactivate
set the DJANGO_SETTINGS_MODULE env variable
$ export DJANGO_SETTINGS_MODULE=geonode.settings
install the httpie utility via pip
$ pip install httpie
$ http http://localhost/geoserver/rest
$ http -a admin http://localhost/geoserver/rest
<type in password - geoserver>
Python¶
launch ipython and experiment
> x = "some text"
> x.<TAB><TAB>
> x.split.__doc__
> ?
execute a script with ipython and open the REPL
$ echo "twos = [ x*2 for x in range(5)]" > test.py
$ ipython -i test.py
> twos
Install GeoNode for Development¶
In order to install GeoNode 2.0 in development mode on Ubuntu 12.04, the following steps are required:
Note
For Windows: (Install GeoNode for Development (Windows))
Summary of the installation steps
- Retrieve latest apt-get list
- Install build tools and libraries
- Install dependencies (Python, Postgresql and Java) and supporting tools
- Add Nodejs PPA and other tools required for static development
- Set up a virtual environment (virtualenv)
- Clone geonode from github and install it in the virtual environment
- Run paver to install GeoServer and start the development servers
- Compile and Start the server
- Start Geonode instance
- To stop the server
- Next create a superuser for your django geonode
Note
The following steps have to be executed in your terminal. The steps have to be done as a root user, therefore don’t forget to type sudo in front!
Warning
Don’t forget to stop the GeoNode Production services if enabled
service apache2 stop
service tomcat7 stop
If possible log in as the root user, open a terminal and
cd /home/geonode/dev
Retrieve latest apt-get list
$ sudo apt-get update
Install build tools and libraries
$ sudo apt-get install -y build-essential libxml2-dev libxslt1-dev libpq-dev zlib1g-dev
Install dependencies
Python native dependencies
$ sudo apt-get install -y python-dev python-imaging python-lxml python-pyproj python-shapely python-nose python-httplib2 python-pip python-software-properties
Install Python Virtual Environment
$ sudo pip install virtualenvwrapper
Postgresql
Note
The following steps must be executed only if you don’t have PostgreSQL and PostGIS already installed on your system (see Install GeoNode Application)
$ sudo apt-get install postgresql-9.3-postgis-2.1 postgresql-9.3-postgis-scripts
Change postgres UNIX password
$ sudo passwd -u postgres # change password expiry information
$ sudo passwd postgres    # change UNIX password for postgres
Create geonode role and database
$ su postgres
$ createdb geonode_dev
$ createdb geonode_dev-imports
$ psql
postgres=# \password postgres
postgres=# CREATE USER geonode_dev WITH PASSWORD 'geonode_dev'; # should be same as password in settings.py
postgres=# GRANT ALL PRIVILEGES ON DATABASE "geonode_dev" to geonode_dev;
postgres=# GRANT ALL PRIVILEGES ON DATABASE "geonode_dev-imports" to geonode_dev;
postgres=# \q
$ psql -d geonode_dev-imports -c 'CREATE EXTENSION postgis;'
$ psql -d geonode_dev-imports -c 'GRANT ALL ON geometry_columns TO PUBLIC;'
$ psql -d geonode_dev-imports -c 'GRANT ALL ON spatial_ref_sys TO PUBLIC;'
$ exit
Edit PostgreSQL configuration file
sudo gedit /etc/postgresql/9.3/main/pg_hba.conf
Scroll to the bottom of the file and edit this line
# "local" is for Unix domain socket connections only
local   all   all   peer
As follows
# "local" is for Unix domain socket connections only
local   all   all   trust
Restart PostgreSQL to make the changes effective
sudo service postgresql restart
Java dependencies
Note
The following steps must be executed only if you don’t have a Java JDK or JRE already installed on your system (see Install GeoNode Application)
$ sudo apt-get install -y --force-yes openjdk-6-jdk --no-install-recommends
supporting tools
$ sudo apt-get install -y ant maven2 git gettext
Set up a virtual environment
Here is where Geonode will later be running.
Add the virtualenvwrapper to your new environment.
$ cd /home/geonode/dev
$ export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python
$ export WORKON_HOME=/home/geonode/dev/.venvs
$ source /usr/local/bin/virtualenvwrapper.sh
$ export PIP_DOWNLOAD_CACHE=$HOME/.pip-downloads
On Ubuntu, you can add the above settings to your .bashrc file and reload the settings running
$ echo export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python >> ~/.bashrc
$ echo export WORKON_HOME=/home/geonode/dev/.venvs >> ~/.bashrc
$ echo source /usr/local/bin/virtualenvwrapper.sh >> ~/.bashrc
$ echo export PIP_DOWNLOAD_CACHE=$HOME/.pip-downloads >> ~/.bashrc
$ source ~/.bashrc
Set up the local virtual environment for Geonode
$ mkvirtualenv geonode
$ workon geonode
# or
$ source /home/geonode/dev/.venvs/geonode/bin/activate
This creates a new directory for your project and a new virtual environment.
Get the code
To download the latest GeoNode version from GitHub, the git clone command is used
Note
If you are following the GeoNode training, skip the following command. You can find the cloned repository in /home/geonode/dev
$ git clone https://github.com/GeoNode/geonode.git
Add Nodejs PPA and other tools required for static development
This is required for static development
Note
If you are following GeoNode’s training, nodejs is already installed in the Virtual Machine; skip the first three commands and jump to cd geonode/geonode/static
$ sudo add-apt-repository -y ppa:chris-lea/node.js
$ sudo apt-get update
$ sudo apt-get install -y nodejs
$ cd geonode/geonode/static
$ npm install --save-dev

# If the last command does not work, you can run it manually like this:
$ npm install bower --save-dev
$ npm install grunt-cli --save-dev
$ npm install grunt-contrib-jshint --save-dev
$ npm install grunt-contrib-less --save-dev
$ npm install grunt-contrib-concat --save-dev
$ npm install grunt-contrib-copy --save-dev
$ npm install grunt-text-replace --save-dev
$ npm install grunt-contrib-uglify --save-dev
$ npm install grunt-contrib-cssmin --save-dev
$ npm install grunt-contrib-watch --save-dev
Every time you want to update the static files after making changes to the sources, go to geonode/static and run ‘grunt production’.
Install GeoNode in the new active local virtualenv
$ cd /home/geonode/dev
$ pip install pip --upgrade
$ pip install -e geonode --use-mirrors
$ cd geonode
If the install fails because of an error related to pyproj not being verified (happens on pip 1.5), use the following:
$ pip install -e geonode --use-mirrors --allow-external pyproj --allow-unverified pyproj
Create local_settings.py

Add the local_settings.py to your GeoNode installation

$ cd /home/geonode/dev/geonode
$ cp geonode/local_settings.py.sample geonode/local_settings.py
$ gedit geonode/local_settings.py
Add the following lines to the local_settings.py:

...
SITEURL = "http://localhost:8000/"

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'geonode_dev',
        'USER': 'geonode_dev',
        'PASSWORD': 'geonode_dev',
    },
    # vector datastore for uploads
    'datastore': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        #'ENGINE': '', # Empty ENGINE name disables
        'NAME': 'geonode_dev-imports',
        'USER': 'geonode_dev',
        'PASSWORD': 'geonode_dev',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

# OGC (WMS/WFS/WCS) Server Settings
OGC_SERVER = {
    'default': {
        'BACKEND': 'geonode.geoserver',
        'LOCATION': 'http://localhost:8080/geoserver/',
        'PUBLIC_LOCATION': 'http://localhost:8080/geoserver/',
        'USER': 'admin',
        'PASSWORD': 'geoserver',
        'MAPFISH_PRINT_ENABLED': True,
        'PRINT_NG_ENABLED': True,
        'GEONODE_SECURITY_ENABLED': True,
        'GEOGIG_ENABLED': False,
        'WMST_ENABLED': False,
        'BACKEND_WRITE_ENABLED': True,
        'WPS_ENABLED': False,
        'LOG_FILE': '%s/geoserver/data/logs/geoserver.log' % os.path.abspath(os.path.join(PROJECT_ROOT, os.pardir)),
        # Set to name of database in DATABASES dictionary to enable
        'DATASTORE': 'datastore',
    }
}

CATALOGUE = {
    'default': {
        # The underlying CSW implementation
        # default is pycsw in local mode (tied directly to GeoNode Django DB)
        'ENGINE': 'geonode.catalogue.backends.pycsw_local',
        # pycsw in non-local mode
        # 'ENGINE': 'geonode.catalogue.backends.pycsw_http',
        # GeoNetwork opensource
        # 'ENGINE': 'geonode.catalogue.backends.geonetwork',
        # deegree and others
        # 'ENGINE': 'geonode.catalogue.backends.generic',
        # The FULLY QUALIFIED base url to the CSW instance for this GeoNode
        'URL': '%scatalogue/csw' % SITEURL,
        # 'URL': 'http://localhost:8080/geonetwork/srv/en/csw',
        # 'URL': 'http://localhost:8080/deegree-csw-demo-3.0.4/services',
        # login credentials (for GeoNetwork)
        'USER': 'admin',
        'PASSWORD': 'admin',
    }
}
...
Compile and Start the server for the first time
Align the database structure

$ cd /home/geonode/dev/geonode
$ python manage.py syncdb --noinput
Warning
If the start fails because of an import error related to osgeo, then please consult the Install GDAL for Development.
The last step is to compile GeoServer and setup
$ paver setup
Now we can start our geonode instance
Warning
Don’t forget to stop the GeoNode Production services if enabled
service apache2 stop
service tomcat7 stop
$ paver start
Visit the geonode site by typing http://localhost:8000 into your browser window.
If you are using a different IP address (e.g 1.1.1.1), then start paver using the command below.
$ paver start -b 1.1.1.1:8000
Warning
If the start fails because of an import error related to osgeo, then please consult the Install GDAL for Development.
To stop the server
Press Ctrl+C on your keyboard to stop the server
now type:
$ paver stop # to stop all django, geoserver services
Next create a superuser for your django geonode
Create a superuser so you can log on to your local geonode installation at http://localhost:8000
$ python manage.py createsuperuser
Start working on Geonode the next day after install¶
With every restart of your machine, you have to restart GeoNode as well. That means you will not be able to open http://localhost:8000 directly after a fresh start of your machine. In order to use GeoNode, you have to activate your virtual environment and start the development servers.
Note
username is the name of your machine and personal folder!
Activate virtualenv
To activate your virtualenv you just need to type
$ workon geonode
or
$ source /home/geonode/dev/.venvs/geonode/bin/activate

Note
Be careful with the path, it might not be the same for you!
Start the server
Warning
Don’t forget to stop the GeoNode Production services if enabled
service apache2 stop
service tomcat7 stop

$ cd geonode
$ paver start_geoserver
$ paver start_django

Now you are able to access http://localhost:8000 again.
Note
Remember that you have to do these steps each time you restart your machine!!
Hint
Now that you’ve followed these installation instructions, GeoNode is running in development mode. This also means that you are using all the default settings of GeoNode. If you want to change them, e.g. use Tomcat instead of Jetty, or PostgreSQL instead of sqlite3, you may follow the steps from the section Configure Manually in GeoNode (vlatest) installation on Ubuntu 14.04.
GeoNode debugging techniques¶
GeoNode can be difficult to debug as there are several different components involved:
- Browser - includes HTML/CSS issues, JavaScript, etc.
- Django - GeoNode HTML views and web APIs
- GeoServer - Core Wxx services and Platform REST APIs
When attempting to diagnose a specific problem, the order of investigation often mirrors the order above - that is, start with the client: is this a bug in code running on the browser? If not, step to the next level: the Django responses to client requests. Often this is possible via the browser using the correct tools. Many requests require Django communication with GeoServer. This is the next stage of investigation if a specific bug does not appear to originate in Django or the client.
The following section covers techniques to help diagnose and debug errors.
Debugging GeoNode in the Browser¶
This section covers some techniques for debugging browser- and Django-related response bugs using the Firefox web browser extension named Firebug. The concepts covered apply to other browsers’ tools but may vary in terminology.
Another Firefox extension worth noting is ‘jsonview’. This extension supports formatted viewing of JSON responses and integrates well with Firebug.
References:
Net Tab¶
The net tab allows viewing all of the network traffic from the browser. The subtabs (like the selected “Images” tab) allow filtering by the type of traffic.

Firebug Net Tab
In this screen-shot, the mouse hover displays the image content and the full URL requested. One can right-click to copy-paste the URL or view in a separate tab. This is useful for obtaining test URLs. The grayed out entries show that the resource was cached via conditional-get (the 304 not modified). Other very useful advanced information includes the size of the response and the loading indicator graphics on the right. At the bottom, note the total size and timing information.
Net Tab Exercises¶
- Go to layers/maps/search pages and look at the various requests. Note the XHR subtab. Look at the various request specific tabs: headers, params, etc.
- Use the ‘disable browser cache’ option and see how it affects page loads. Discuss advantages/challenges of caching.
DOM Tab¶
The DOM tab displays all of the top-level window objects. By drilling down, this can be a useful way to find out what’s going on in a page.

Firebug DOM Tab
In this example, the mouse is hovering over the app object. Note the high level view of objects and their fields. The console tab allows interacting with the objects.
DOM Tab Exercises¶
- Drill down in the DOM tab.
- Use the console to interactively exercise jquery.
- Use the console to interact with the app/map or other page objects
Script Tab¶
The script tab allows viewing scripts and debugging.

The screen-shot displays a breakpoint set at line 3, the current code is stopped at line 8 and the mouse hover is displaying the value of the variable ‘class_list’. On the right, the ‘Watch’ tab displays the various variables and scopes and offers a drill down view similar to the DOM view. The stack tab displays the execution stack context outside the current frame.
Script Tab Exercises¶
- Step through some code
- Look at various features: variables, scopes, DOM drill-down
HTML Tab¶
The HTML tab allows viewing and drilling down into the DOM. This is an incredibly useful feature when doing CSS or HTML work.

The screen-shot displays a search result ‘article’ element highlighted with padding and margin in yellow and purple. The DOM structure is displayed on the left and the right panel displays the specific style rules while the computed tab displays the effective style rules. The layout tab displays rulers and property values while the DOM tab displays a debug/DOM-like view of the actual object’s properties.
HTML Tab Exercises¶
- Identify elements, look at the tabs on the right
- Change styles, add new rules and styles
- Edit existing HTML elements via the raw and tree-view
Debugging GeoExplorer¶
In case you want to debug the GeoExplorer behaviour in your browser with Firebug or the Chromium Developer toolbar, you may do the following:
Install Boundless Suite:
$ git clone git://github.com/GeoNode/suite.git
$ cd suite
$ git submodule update --init --recursive
Run GeoExplorer in debug mode:
$ cd geoexplorer
$ ant debug
Check if GeoExplorer is running at this url: http://localhost:9080
Edit the layers/templates/layers/layer_geoext_map.html file and replace this line:
{% include "geonode/geo_header.html" %}
with this one:
{% include "geonode/geo_header_debug.html" %}
Debugging GeoNode’s Python Components¶
Logging¶
References:
Logging is controlled by the contents of the logging data structure defined in settings.py. The default settings distributed with GeoNode are configured to only log errors. During development, it’s a good idea to override the logging data structure with something a bit more verbose.
Output¶
In production, logging output will go into the Apache error log, located in /var/log/apache2/error.log. During development, logging output will, by default, go to standard error.
Configuring¶
- Ensure the ‘console’ handler is at the appropriate level. It will ignore log messages below the set level.
- Ensure the specific logger you’d like to use is set at the correct level.
- If attempting to log SQL, ensure DEBUG=True in your local_settings.py.
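A verbose override along these lines could go in local_settings.py. This is a sketch using Django's standard logging-dict format; the handler and logger names shown are illustrative choices, and GeoNode's shipped defaults may differ:

```python
# Sketch of a more verbose development logging config for local_settings.py.
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {'format': '%(levelname)s %(name)s: %(message)s'},
    },
    'handlers': {
        # The handler level acts as a floor: messages below DEBUG
        # would be ignored by this handler.
        'console': {
            'level': 'DEBUG',
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'loggers': {
        # Set the specific logger you care about to the desired level.
        'geonode': {'handlers': ['console'], 'level': 'DEBUG'},
        # SQL tracing also requires DEBUG=True in local_settings.py.
        'django.db.backends': {'handlers': ['console'], 'level': 'DEBUG'},
    },
}
```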
Debugging SQL¶
- To trace all SQL in Django, configure the django.db.backends logger to DEBUG.
- To examine a specific query object, you can use the query field: str(Layer.objects.all().query)
- You can gather more information by using django.db.connection.queries. When DEBUG is enabled, query SQL and timing information is stored in this list.
Hints¶
Don’t use print statements. They are easy to use in development mode but in production they will cause failure.
Take advantage of python. Instead of:
logging.info('some var ' + x + ' is not = ' + y)
Use:
logging.info('some var %s is not = %s', x, y)
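The difference matters when a level is disabled: with lazy %s-style arguments the values are never stringified. A small runnable illustration (the logger name is arbitrary):

```python
import logging

logging.basicConfig(level=logging.ERROR)  # INFO disabled, as in production
log = logging.getLogger("geonode.example")

class Expensive:
    def __str__(self):
        raise RuntimeError("should never be formatted")

# Eager formatting ('some var ' + str(x)) would raise here; with lazy
# formatting __str__ is never called because INFO is disabled.
log.info("some var %s is not = %s", Expensive(), Expensive())
print("no error raised")
```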
Exercises¶
- Enable logging of all SQL statements. Visit some pages and view the logging output.
- Using the python shell, use the queries object to demonstrate the results of specific queries.
PDB¶
Reference:
For the adventurous, pdb allows for an interactive debugging session. This is only possible when running in a shell via manage.py runserver or paver runserver.
To set a breakpoint, insert the following code before the code to debug.
import pdb; pdb.set_trace()
When execution reaches this statement, the debugger will activate. The commands are noted in the link above. In addition to those debugger-specific commands, general python statements are supported. For example, typing the name of a variable in scope will yield the value via string coercion. Typing “n” will execute the next line, “c” will continue the execution of the program, and “q” will quit.
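In context, the breakpoint pattern looks like this; resolve_title is a hypothetical helper, and the set_trace() call is commented out so the sketch runs non-interactively:

```python
import pdb

def resolve_title(layer):
    # Uncomment the next line to pause execution here and inspect `layer`:
    # pdb.set_trace()
    return layer.get("title") or layer.get("name")

print(resolve_title({"name": "roads"}))  # prints: roads
```

Once stopped at the breakpoint you could type `layer` to inspect the argument, "n" to step, "c" to continue, or "q" to quit, exactly as described above.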
Debugging GeoServer¶
Resources:
- http://docs.geoserver.org/stable/en/user/advanced/logging.html
- http://docs.geoserver.org/stable/en/user/production/troubleshooting.html
This section does not attempt to cover developer-level debugging in GeoServer as this is a much larger topic involving many more tools. The goal here is to provide ‘black-box’ techniques to help resolve and report problems.
Logging¶
GeoServer logging, while sometimes containing too much information, is the best way to start diagnosing an issue in GeoNode once the other components have been ruled out. To create a proper error report for use in requesting support, providing any contextual logging information is critical.
When using a standard GeoServer installation, the GeoServer logs are located at /usr/share/geoserver/data/logs/geoserver.log. The properties files that control the varying rules are also located here.
Exercises¶
- Switch logging levels for various loggers.
- Look at the different logging profiles and discuss the loggers and levels.
- Learn how to read stacktraces, nested or otherwise.
Advanced Troubleshooting¶
JVM diagnostics and advanced troubleshooting techniques are covered in the GeoServer documents linked to above. When providing information for a bug report, these can be helpful but in-depth Java knowledge is required to fully comprehend the output from some of these tools.
Exercises¶
- Look at jstack output
Using Django to Help Debug¶
The gsconfig library provides a rich interface to interacting with GeoServer’s REST API. This allows high-level functions as well as viewing raw REST responses.
cat = Layer.objects.gs_catalog
cat.get_layers()      # list of gsconfig layer objects
# OR, for a specific layer
lyr = Layer.objects.get(id=1)
lyr.resource          # specific gsconfig layer object
lyr.resource.fetch()  # get the XML from REST
lyr.resource.dom      # reference to the parsed XML
from xml.etree.ElementTree import tostring
tostring(lyr.resource.dom)
GeoNode APIs¶
GeoServer REST interface¶
This module is a walkthrough of the GeoServer REST capabilities and APIs. Several methods and frameworks for working with REST APIs and functions will also be presented and inspected in depth.
What you will learn
In this section you will learn:
Introducing REST concepts¶
REST (REpresentational State Transfer) is a simple approach to web services strongly based on the basic HTTP infrastructure, such as URLs, HTTP methods and HTTP response codes.
The basic elements of a REST service are:
- Resource: each business entity is linked to a unique URL that represents it, and allows for its retrieval and eventual modification. In GeoServer such resources are layers, stores, styles and so on
- Connectedness: the various resources are linked to one another following significant relationships. For example, in GeoServer a store contains a list of feature types or coverages, a layer is linked to a style and a feature type/coverage, and so on (in other terms, the set of resources is supposed to be crawlable just like a web site).
- Representation: each resource can be represented in one or more ways. For example, in GeoServer resources are normally represented as HTML, XML and JSON.
- Statelessness: each communication with the server is atomic and not related to the communications that happened before or after it. Whatever state needs to be managed must be stored as a publicly accessible resource.
- HTTP methods reuse: each resource is manipulated via the common HTTP methods each having a common meaning, summarized by the following table
Method   Description

GET      Retrieves the resource in the specified representation. Query parameters are often used to filter the contents of the returned resource, and sometimes to specify the desired representation format.

HEAD     Similar to GET, but instead of returning the full response it returns only the HTTP headers, which might contain information such as the last modification date of the resource.

PUT      Stores the representation of a resource at a given URL. Used when the client already knows what the final URL of the resource will be.

POST     Creates a new resource by either getting its contents in the request, or having some parameters to compute it. The main difference is that the final URL of the created resource is not known to the client, and is returned by the server after creation via a redirect. It is also used to have the server perform certain actions that cannot be encoded as another method, for example, have it send an SMS (assuming creating a resource representing the SMS is not desirable).

DELETE   Destroys the specified resource.

The above results in a web services protocol that is easy to understand, implement and connect to from various languages, and with good scalability characteristics.
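The method semantics above can be illustrated by building (but not sending) requests with the Python standard library; the GeoServer URLs and payload are illustrative assumptions matching the examples later in this section:

```python
import urllib.request

# Hypothetical local GeoServer REST endpoint (nothing is sent on the wire).
base = "http://localhost:8083/geoserver/rest/workspaces"
body = b"<workspace><name>myworkspace</name></workspace>"

# POST: create a resource whose final URL the server chooses.
create = urllib.request.Request(base, data=body, method="POST",
                                headers={"Content-type": "text/xml"})
# GET: retrieve the resource in a chosen representation (here, XML).
fetch = urllib.request.Request(base + "/myworkspace.xml", method="GET")
# PUT: store a representation at a URL the client already knows.
update = urllib.request.Request(base + "/myworkspace", data=body,
                                method="PUT",
                                headers={"Content-type": "text/xml"})
# DELETE: destroy the resource.
remove = urllib.request.Request(base + "/myworkspace", method="DELETE")

print([r.get_method() for r in (create, fetch, update, remove)])
```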
The GeoServer REST interface is located at http://localhost:8083/geoserver/rest. By default a browser will show resources in HTML format, allowing for a simple browsable interface to the GeoServer configuration.
Browsing the REST interface with HTML format
Follow the links into workspaces and then geosolutions and switch the format from .html to .xml to see the XML representation:

http://localhost:8083/geoserver/rest/workspaces/geosolutions.xml
The GeoSolutions workspace represented as XML
Using REST module¶
This section contains a number of examples which illustrate various uses of the REST data configuration api.
The GeoServer REST configuration module uses the REST principles to expose services that allow editing the catalog, in particular managing workspaces, stores, layers, styles and groups.
Note
The REST configuration extension normally has to be installed separately; it does not come out of the box.
The examples in this section use the cURL utility, which is a handy command line tool for executing HTTP requests and transferring files.
Open the Terminal and enter the following command:
curl -u admin:geoserver -v -XPOST -H "Content-type: text/xml" -d "<workspace><name>myworkspace</name></workspace>" http://localhost:8083/geoserver/rest/workspaces

The response should contain the following:
Create a new workspace via REST
Go to the Workspaces section via the Web interface to show the new workspace created.

GET request to obtain new workspace details
Get the newly created workspace details by entering the following:

curl -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost:8083/geoserver/rest/workspaces/myworkspace
GET request to obtain new workspace details
Publish the shapefile pointlands using the myworkspace workspace by entering the following:

Linux:

curl -u admin:geoserver -H "Content-type: application/zip" -T /.../pointlands.zip http://localhost:8083/geoserver/rest/workspaces/myworkspace/datastores/pointlands/file.shp

Windows:

curl -u admin:geoserver -H "Content-type: application/zip" -T /.../pointlands.zip http://localhost:8083/geoserver/rest/workspaces/myworkspace/datastores/pointlands/file.shp

Go to the Layer Preview to show the layers in an OpenLayers Map.
Showing the new layer created
The new layers created
Note
If you previously followed the security portion of the workshop, the layer won’t be accessible because the administrator does not have the required roles. Go back to the service security section and remove the rule limiting the GetMap requests.
Retrieve the created data store as XML by entering the following:

curl -u admin:geoserver -XGET http://localhost:8083/geoserver/rest/workspaces/myworkspace/datastores/pointlands.xml

<dataStore>
  <name>pointlands</name>
  <type>Shapefile</type>
  <enabled>true</enabled>
  <workspace>
    <name>myworkspace</name>
    <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost:8083/geoserver/rest/workspaces/myworkspace.xml" type="application/xml"/>
  </workspace>
  <connectionParameters>
    <entry key="url">file:${TRAINING_ROOT}/geoserver_data/data/myworkspace/pointlands/</entry>
    <entry key="namespace">http://myworkspace</entry>
  </connectionParameters>
  <__default>false</__default>
  <featureTypes>
    <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost:8083/geoserver/rest/workspaces/myworkspace/datastores/pointlands/featuretypes.xml" type="application/xml"/>
  </featureTypes>
</dataStore>

Note
By default when a shapefile is uploaded a feature type resource and the associated layer are automatically created.
Retrieve the layer as XML by entering the following:

curl -u admin:geoserver -XGET http://localhost:8083/geoserver/rest/layers/myworkspace:pointlands.xml

<layer>
  <name>pointlands</name>
  <type>VECTOR</type>
  <defaultStyle>
    <name>point</name>
    <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost:8083/geoserver/rest/styles/point.xml" type="application/xml"/>
  </defaultStyle>
  <resource class="featureType">
    <name>pointlands</name>
    <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost:8083/geoserver/rest/workspaces/myworkspace/datastores/pointlands/featuretypes/pointlands.xml" type="application/xml"/>
  </resource>
  <attribution>
    <logoWidth>0</logoWidth>
    <logoHeight>0</logoHeight>
  </attribution>
</layer>

Note
When the layer is created a default style named point is assigned to it.

Create a new style named landmarks with the following SLD (using the GeoServer Admin UI):

<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- a Named Layer is the basic building block of an SLD document -->
  <NamedLayer>
    <Name>default_point</Name>
    <UserStyle>
      <!-- Styles can have names, titles and abstracts -->
      <Title>Default Point</Title>
      <Abstract>A sample style that draws a point</Abstract>
      <!-- FeatureTypeStyles describe how to render different features -->
      <!-- A FeatureTypeStyle for rendering points -->
      <FeatureTypeStyle>
        <Rule>
          <Name>rule1</Name>
          <Title>Red Square</Title>
          <Abstract>A 6 pixel square with a red fill and no stroke</Abstract>
          <PointSymbolizer>
            <Graphic>
              <Mark>
                <WellKnownName>triangle</WellKnownName>
                <Stroke>
                  <CssParameter name="stroke">#66FF66</CssParameter>
                </Stroke>
                <Fill>
                  <CssParameter name="fill">#66FF66</CssParameter>
                </Fill>
              </Mark>
              <Size>10</Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
landmarks
style to the layer createdmyworkspace:pointlands
(this operation does not overwrite the entire layer definition, updates it instead):curl -u admin:geoserver -XPUT -H "Content-type: text/xml" -d "<layer><defaultStyle><name>landmarks</name></defaultStyle><enabled>true</enabled></layer>" http://localhost:8083/geoserver/rest/layers/myworkspace:pointlandsGo to the Layer Preview to show the layers with the new
landmarks
style.![]()
Viewing the layers with the newly created style landmarks
REST configuration examples¶
This section contains a number of examples which illustrate various uses of the REST configuration API. The examples are grouped by the language or environment used.
cURL¶
The examples in this section use cURL, a command line tool for executing HTTP requests and transferring files, to generate requests to GeoServer’s REST interface. Although the examples are based on cURL, they could be adapted for any HTTP-capable tool or library. Please be aware that cURL does not behave exactly like a web browser. In contrast to Mozilla Firefox or Google Chrome, cURL will not automatically escape special characters in your request string. To make sure your requests can be processed correctly, ensure that characters like parentheses, commas and the like are escaped before sending them via cURL. If you use libcurl in PHP 5.5 or newer, you can prepare the URL string using the function curl_escape. In older versions of PHP, htmlspecialchars should do the job as well.
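To see what escaping a browser would perform for you, Python's urllib.parse.quote (used here purely as an illustration, analogous to PHP's curl_escape) shows the percent-encoded form of a value containing parentheses and commas:

```python
from urllib.parse import quote

raw = "styles=point(red),line(blue)"
# Percent-encode everything except '=' (and unreserved characters);
# this is the form that should reach the server.
escaped = quote(raw, safe="=")
print(escaped)  # styles=point%28red%29%2Cline%28blue%29
```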
Adding a new workspace¶
The following creates a new workspace named “acme” with a POST request:
Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d "<workspace><name>acme</name></workspace>" http://localhost/geoserver/rest/workspaces
If executed correctly, the response should contain the following:
< HTTP/1.1 201 Created
...
< Location: http://localhost/geoserver/rest/workspaces/acme

Note the Location response header, which specifies the location (URI) of the newly created workspace.

The workspace information can be retrieved as XML with a GET request:
curl -v -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost/geoserver/rest/workspaces/acme
The response should look like this:
<workspace> <name>acme</name> <dataStores> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme/datastores.xml" type="application/xml"/> </dataStores> <coverageStores> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme/coveragestores.xml" type="application/xml"/> </coverageStores> <wmsStores> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme/wmsstores.xml" type="application/xml"/> </wmsStores> </workspace>This shows that the workspace can contain “
dataStores
” (for vector data), “coverageStores
” (for raster data), and “wmsStores
” (for cascaded WMS servers).Note
The
Accept
header is optional. The following request omits theAccept
header, but will return the same response as above.curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme.xml
Uploading a shapefile¶ In this example a new store will be created by uploading a shapefile.
The following request uploads a zipped shapefile named
roads.zip
and creates a new store namedroads
.Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPUT -H "Content-type: application/zip" --data-binary @roads.zip http://localhost/geoserver/rest/workspaces/acme/datastores/roads/file.shp
The
roads
identifier in the URI refers to the name of the store to be created. To create a store namedsomethingelse
, the URI would behttp://localhost/geoserver/rest/workspaces/acme/datastores/somethingelse/file.shp
If executed correctly, the response should contain the following:
< HTTP/1.1 201 CreatedThe store information can be retrieved as XML with a GET request:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/roads.xml
The response should look like this:
<dataStore> <name>roads</name> <type>Shapefile</type> <enabled>true</enabled> <workspace> <name>acme</name> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme.xml" type="application/xml"/> </workspace> <connectionParameters> <entry key="url">file:/C:/path/to/data_dir/data/acme/roads/</entry> <entry key="namespace">http://acme</entry> </connectionParameters> <__default>false</__default> <featureTypes> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme/datastores/roads/featuretypes.xml" type="application/xml"/> </featureTypes> </dataStore>By default when a shapefile is uploaded, a feature type is automatically created. The feature type information can be retrieved as XML with a GET request:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/roads/featuretypes/tiger_roads.xml
If executed correctly, the response will be:
<featureType> <name>roads</name> <nativeName>roads</nativeName> <namespace> <name>acme</name> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/namespaces/acme.xml" type="application/xml"/> </namespace> ... </featureType>The remainder of the response consists of layer metadata and configuration information.
Note
Notice that the name of the Layer (and of the FeatureType) corresponds to the physical name of the shapefile contained in the archive.
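The URI pattern used above can be captured in a small helper when scripting uploads. The following Python sketch is purely illustrative (the `shapefile_upload_url` name is invented, not a GeoServer API):

```python
def shapefile_upload_url(base, workspace, store):
    """Build the endpoint for PUT-uploading a zipped shapefile into a store.

    The store named in the URI is created if it does not exist yet,
    mirroring the cURL example above. Hypothetical helper.
    """
    return "{0}/rest/workspaces/{1}/datastores/{2}/file.shp".format(
        base.rstrip("/"), workspace, store)

print(shapefile_upload_url("http://localhost/geoserver", "acme", "roads"))
```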
Adding an existing shapefile¶ In the previous example a shapefile was uploaded directly to GeoServer by sending a zip file in the body of a PUT request. This example shows how to publish a shapefile that already exists on the server.
Consider a directory on the server
/data/shapefiles
that contains the shapefilerivers.shp
. The following adds a new store for the shapefile:Note
In order to execute the exercise, create a folder
shapefiles
somewhere on the server and extract there theshapefiles.zip
.curl -v -u admin:geoserver -XPUT -H "Content-type: text/plain" -d "file:///home/geonode/data/shapefiles/rivers.shp" http://localhost/geoserver/rest/workspaces/acme/datastores/rivers/external.shp
The
external.shp
part of the request URI indicates that the file is coming from outside the catalog.If executed correctly, the response should contain the following:
< HTTP/1.1 201 CreatedThe shapefile will be added to the existing store and published as a layer.
To verify the contents of the store, execute a GET request. Since the XML response only provides details about the store itself without showing its contents, execute a GET request for HTML:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/rivers.html
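If you script this request, remember that the request body must be a file:// URI, not a plain filesystem path. A minimal Python sketch that builds one (the `file_uri` helper is invented for illustration; `pathname2url` takes care of escaping characters such as spaces):

```python
from urllib.request import pathname2url

def file_uri(path):
    """Turn an absolute server-side path into the file:// URI
    expected by the external.shp endpoint. Hypothetical helper."""
    return "file://" + pathname2url(path)

print(file_uri("/home/geonode/data/shapefiles/rivers.shp"))
```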
Adding a directory of existing shapefiles¶ This example shows how to load and create a store that contains a number of shapefiles, all with a single operation. This example is very similar to the example above of adding a single shapefile.
Consider a directory on the server
/data/shapefiles
that contains multiple shapefiles. The following adds a new store for the directory.Note
In order to execute the exercise, create a folder
shapefiles
somewhere on the server and extract there theshapefiles.zip
.curl -v -u admin:geoserver -XPUT -H "Content-type: text/plain" -d "file:///home/geonode/data/shapefiles/" "http://localhost/geoserver/rest/workspaces/acme/datastores/shapefiles/external.shp?configure=all"
Note the
configure=all
query string parameter, which sets each shapefile in the directory to be loaded and published.If executed correctly, the response should contain the following:
< HTTP/1.1 201 CreatedTo verify the contents of the store, execute a GET request. Since the XML response only provides details about the store itself without showing its contents, execute a GET request for HTML:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/shapefiles.html
Creating a layer style¶ This example will create a new style on the server and populate it with the contents of a local SLD file.
The following creates a new style named
roads_style
:Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d "<style><name>roads_style</name><filename>roads.sld</filename></style>" http://localhost/geoserver/rest/styles
If executed correctly, the response should contain the following:
< HTTP/1.1 201 CreatedThis request uploads a file called
roads.sld
file and populates theroads_style
with its contents:curl -v -u admin:geoserver -XPUT -H "Content-type: application/vnd.ogc.sld+xml" -d @roads.sld http://localhost/geoserver/rest/styles/roads_style
If executed correctly, the response should contain the following:
< HTTP/1.1 200 OKThe SLD itself can be downloaded through a GET request:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/styles/roads_style.sld
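When generating the `<style>` payload from a script, it is safer to let an XML library do the serialization than to concatenate strings, since style names may contain characters that need escaping. A Python sketch (the `style_xml` helper name is invented):

```python
import xml.etree.ElementTree as ET

def style_xml(name, filename):
    """Build the <style> payload used when POSTing a new style.
    ElementTree escapes any special XML characters in the values."""
    style = ET.Element("style")
    ET.SubElement(style, "name").text = name
    ET.SubElement(style, "filename").text = filename
    return ET.tostring(style, encoding="unicode")

print(style_xml("roads_style", "roads.sld"))
```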
Changing a layer style¶ This example will alter a layer style. Prior to making any changes, it is helpful to view the existing configuration for a given layer.
Note
Each code block below contains a single command that may be extended over multiple lines.
The following retrieves the “acme:roads” layer information as XML:
curl -v -u admin:geoserver -XGET "http://localhost/geoserver/rest/layers/acme:tiger_roads.xml"
The response in this case would be:
<layer> <name>tiger_roads</name> <type>VECTOR</type> <defaultStyle> <name>line</name> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/styles/line.xml" type="application/xml"/> </defaultStyle> <resource class="featureType"> <name>tiger_roads</name> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme/datastores/roads/featuretypes/tiger_roads.xml" type="application/xml"/> </resource> <attribution> <logoWidth>0</logoWidth> <logoHeight>0</logoHeight> </attribution> </layer>When the layer is created, GeoServer assigns a default style to the layer that matches the geometry of the layer. In this case a style named
line
is assigned to the layer. This style can be viewed with a WMS request:http://localhost/geoserver/wms/reflect?layers=acme:tiger_roadsIn this next example a new style will be created called
roads_style
and assigned to the “acme:roads” layer:curl -v -u admin:geoserver -XPUT -H "Content-type: text/xml" -d "<layer><defaultStyle><name>roads_style</name></defaultStyle></layer>" http://localhost/geoserver/rest/layers/acme:tiger_roads
If executed correctly, the response should contain the following:
< HTTP/1.1 200 OKThe new style can be viewed with the same WMS request as above:
http://localhost/geoserver/wms/reflect?layers=acme:tiger_roadsNote that if you want to upload the style into a workspace (i.e., not make it a global style) and then assign this style to a layer in that workspace, you first need to create the style in the given workspace:
curl -u admin:geoserver -XPOST -H 'Content-type: text/xml' -d '<style><name>roads_style</name><filename>roads.sld</filename></style>' http://localhost/geoserver/rest/workspaces/acme/stylesUpload the file within the workspace:
curl -u admin:geoserver -XPUT -H 'Content-type: application/vnd.ogc.sld+xml' -d @roads.sld http://localhost/geoserver/rest/workspaces/acme/styles/roads_styleAnd finally apply that style to the layer. Note the use of the
<workspace>
tag in the XML:curl -u admin:geoserver -XPUT -H 'Content-type: text/xml' -d '<layer><defaultStyle><name>roads_style</name><workspace>acme</workspace></defaultStyle></layer>' http://localhost/geoserver/rest/layers/acme:tiger_roadsAdding a PostGIS database¶ In this example a PostGIS database named
nyc
will be added as a new store.Warning
This section assumes that a PostGIS database named
nyc
is present on the local system and is accessible by the userbob
.Note
In order to create and setup the database locally, follow the instructions at Create and Prepare the nyc Example DataBase
Create a new text file and add the following content to it. This will represent the new store. Save the file as
nycDataStore.xml
.<dataStore> <name>nyc</name> <connectionParameters> <host>localhost</host> <port>5432</port> <database>nyc</database> <user>bob</user> <passwd>postgres</passwd> <dbtype>postgis</dbtype> </connectionParameters> </dataStore>The following will add the new PostGIS store to the GeoServer catalog:
Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPOST -T nycDataStore.xml -H "Content-type: text/xml" http://localhost/geoserver/rest/workspaces/acme/datastores
If executed correctly, the response should contain the following:
< HTTP/1.1 200 OKThe store information can be retrieved as XML with a GET request:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/nyc.xml
The response should look like the following:
<dataStore> <name>nyc</name> <type>PostGIS</type> <enabled>true</enabled> <workspace> <name>acme</name> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme.xml" type="application/xml"/> </workspace> <connectionParameters> <entry key="port">5432</entry> <entry key="dbtype">postgis</entry> <entry key="host">localhost</entry> <entry key="user">bob</entry> <entry key="database">nyc</entry> <entry key="namespace">http://acme</entry> </connectionParameters> <__default>false</__default> <featureTypes> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes.xml" type="application/xml"/> </featureTypes> </dataStore>Adding a PostGIS table¶ In this example a table from the PostGIS database created in the previous example will be added as a feature type.
Warning
This example assumes the table has already been created and the
tiger_roads
Layer deleted in case you have executed the previous steps.The following adds the table
tiger_roads
as a new feature type:Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d "<featureType><name>tiger_roads</name></featureType>" http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes
The featuretype information can be retrieved as XML with a GET request:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes/tiger_roads.xml
This layer can be viewed with a WMS GetMap request:
http://localhost/geoserver/wms/reflect?layers=acme:tiger_roadsCreating a PostGIS table¶ In the previous example, a new feature type was added based on a PostGIS table that already existed in the database. The following example will not only create a new feature type in GeoServer, but will also create the PostGIS table itself.
Create a new text file and add the following content to it. This will represent the definition of the new feature type and table. Save the file as
annotations.xml
.<featureType> <name>annotations</name> <nativeName>annotations</nativeName> <title>Annotations</title> <srs>EPSG:4326</srs> <attributes> <attribute> <name>the_geom</name> <binding>com.vividsolutions.jts.geom.Point</binding> </attribute> <attribute> <name>description</name> <binding>java.lang.String</binding> </attribute> <attribute> <name>timestamp</name> <binding>java.util.Date</binding> </attribute> </attributes> </featureType>This request will perform the feature type creation and add the new table:
Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPOST -T annotations.xml -H "Content-type: text/xml" http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes
The result is a new, empty table named “annotations” in the “nyc” database, fully configured as a feature type.
The featuretype information can be retrieved as XML with a GET request:
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes/annotations.xml
Creating a layer group¶ Warning
This example assumes the tables have already been created and the
tiger_roads
,poly_landmarks
,poi
,giant_polygon
Layers have been created.$ curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d "<featureType><name>giant_polygon</name></featureType>" http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes $ curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d "<featureType><name>poi</name></featureType>" http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypes $ curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d "<featureType><name>poly_landmarks</name></featureType>" http://localhost/geoserver/rest/workspaces/acme/datastores/nyc/featuretypesIn this example a layer group will be created, based on layers that already exist on the server.
Create a new text file and add the following content to it. This file will represent the definition of the new layer group. Save the file as
nycLayerGroup.xml
.<layerGroup> <name>nyc</name> <layers> <layer>poi</layer> <layer>poly_landmarks</layer> <layer>tiger_roads</layer> </layers> <styles> <style>point</style> <style>polygon</style> <style>roads_style</style> </styles> </layerGroup>The following request creates the new layer group:
Note
Each code block below contains a single command that may be extended over multiple lines.
curl -v -u admin:geoserver -XPOST -d @nycLayerGroup.xml -H "Content-type: text/xml" http://localhost/geoserver/rest/layergroups
Note
The argument
-d@filename.xml
in this example is used to send a file as the body of an HTTP request with a POST method. The argument-T filename.xml
used in the previous example was used to send a file as the body of an HTTP request with a PUT method.This layer group can be viewed with a WMS GetMap request:
http://localhost/geoserver/wms/reflect?layers=nyc&format=openlayersRetrieving component versions¶ This example shows how to retrieve the versions of the main components: GeoServer, GeoTools, and GeoWebCache:
Note
The code block below contains a single command that is extended over multiple lines.
curl -v -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost/geoserver/rest/about/version.xml
The response will look something like this:
<about> <resource name="GeoServer"> <Build-Timestamp>04-Aug-2015 11:00</Build-Timestamp> <Git-Revision>bca94d09e2e18839814a4b663ba8b0fca2130e47</Git-Revision> <Version>2.7-SNAPSHOT</Version> </resource> <resource name="GeoTools"> <Build-Timestamp>29-Jul-2015 10:13</Build-Timestamp> <Git-Revision>f50be97a039cd06d43a87ec3cc101626f0ac9fd2</Git-Revision> <Version>13-SNAPSHOT</Version> </resource> <resource name="GeoWebCache"> <Git-Revision>f6e0d39c29c2317d2839c52a84676935e5b046cf/f6e0d39c29c2317d2839c52a84676935e5b046cf</Git-Revision> <Version>1.7-SNAPSHOT</Version> </resource> </about>Retrieving manifests¶ This collection of examples shows how to retrieve the full manifest and subsets of the manifest as known to the ClassLoader.
Note
The code block below contains a single command that is extended over multiple lines.
curl -v -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost/geoserver/rest/about/manifest.xml
The result will be a very long list of manifest information. While this can be useful, it is often desirable to filter this list.
Filtering over resource name¶ It is possible to filter over resource names using regular expressions. This example will retrieve only resources where the
name
attribute matchesgwc-.*
:Note
The code block below contains a single command that is extended over multiple lines.
curl -v -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost/geoserver/rest/about/manifest.xml?manifest=gwc-.*
The result will look something like this (edited for brevity):
<about> <resource name="gwc-2.3.0"> ... </resource> <resource name="gwc-core-1.4.0"> ... </resource> <resource name="gwc-diskquota-core-1.4.0"> ... </resource> <resource name="gwc-diskquota-jdbc-1.4.0"> ... </resource> <resource name="gwc-georss-1.4.0"> ... </resource> <resource name="gwc-gmaps-1.4.0"> ... </resource> <resource name="gwc-kml-1.4.0"> ... </resource> <resource name="gwc-rest-1.4.0"> ... </resource> <resource name="gwc-tms-1.4.0"> ... </resource> <resource name="gwc-ve-1.4.0"> ... </resource> <resource name="gwc-wms-1.4.0"> ... </resource> <resource name="gwc-wmts-1.4.0"> ... </resource> </about>Filtering over resource properties¶ Filtering is also available over resulting resource properties. This example will retrieve only resources with a property equal to
GeoServerModule
.Note
The code blocks below contain a single command that is extended over multiple lines.
curl -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost/geoserver/rest/about/manifest.xml?key=GeoServerModule
The result will look something like this (edited for brevity):
<about> <resource name="control-flow-2.3.0"> <GeoServerModule>extension</GeoServerModule> ... </resource> ... <resource name="wms-2.3.0"> <GeoServerModule>core</GeoServerModule> ... </resource> </about>It is also possible to filter against both property and value. To retrieve only resources where a property named
GeoServerModule
has a value equal toextension
, append the above request with&value=extension
:curl -u admin:geoserver -XGET -H "Accept: text/xml" http://localhost/geoserver/rest/about/manifest.xml?key=GeoServerModule&value=extension
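When building such filtered URLs from a script, it is worth letting a library assemble the query string so that the parameter values are escaped correctly. A sketch using Python’s standard library:

```python
from urllib.parse import urlencode

# Assemble the manifest-filter query string; urlencode percent-encodes
# the values, which matters once filters contain '&', spaces, or quotes.
params = {"key": "GeoServerModule", "value": "extension"}
qs = urlencode(params)
url = "http://localhost/geoserver/rest/about/manifest.xml?" + qs
print(url)
```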
Uploading and modifying an image mosaic¶ The following command uploads a
polyphemus.zip
file containing the definition of a mosaic (along with at least one granule of the mosaic to initialize the resolutions, overviews and the like) and will configure all the coverages in it as new layers.Note
The code blocks below contain a single command that is extended over multiple lines.
curl -u admin:geoserver -XPUT -H "Content-type:application/zip" --data-binary @polyphemus.zip http://localhost/geoserver/rest/workspaces/topp/coveragestores/polyphemus/file.imagemosaic
The following instead instructs the mosaic to harvest (or re-harvest) a single file into the mosaic, collecting its properties and updating the mosaic index:
curl -v -u admin:geoserver -XPOST -H "Content-type: text/plain" -d "file:///path/to/the/file/polyphemus_20130302.nc" "http://localhost/geoserver/rest/workspaces/topp/coveragestores/poly-incremental/external.imagemosaic"
Harvesting can also be directed towards a whole directory, as follows:
curl -v -u admin:geoserver -XPOST -H "Content-type: text/plain" -d "file:///path/to/the/mosaic/folder" "http://localhost/geoserver/rest/workspaces/topp/coveragestores/poly-incremental/external.imagemosaic"
The image mosaic index structure can be retrieved using something like:
curl -v -u admin:geoserver -XGET "http://localhost/geoserver/rest/workspaces/topp/coveragestores/polyphemus-v1/coverages/NO2/index.xml"
which will result in the following:
<Schema> <attributes> <Attribute> <name>the_geom</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>com.vividsolutions.jts.geom.Polygon</binding> </Attribute> <Attribute> <name>location</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>java.lang.String</binding> </Attribute> <Attribute> <name>imageindex</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>java.lang.Integer</binding> </Attribute> <Attribute> <name>time</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>java.sql.Timestamp</binding> </Attribute> <Attribute> <name>elevation</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>java.lang.Double</binding> </Attribute> <Attribute> <name>fileDate</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>java.sql.Timestamp</binding> </Attribute> <Attribute> <name>updated</name> <minOccurs>0</minOccurs> <maxOccurs>1</maxOccurs> <nillable>true</nillable> <binding>java.sql.Timestamp</binding> </Attribute> </attributes> <atom:link xmlns:atom="http://www.w3.org/2005/Atom" rel="alternate" href="http://localhost/geoserver/rest/workspaces/topp/coveragestores/polyphemus-v1/coverages/NO2/index/granules.xml" type="application/xml"/> </Schema>Listing the existing granules can be performed as follows:
curl -v -u admin:geoserver -XGET "http://localhost/geoserver/rest/workspaces/topp/coveragestores/polyphemus-v1/coverages/NO2/index/granules.xml?limit=2"
This will result in a GML description of the granules, as follows:
<?xml version="1.0" encoding="UTF-8"?> <wfs:FeatureCollection xmlns:gf="http://www.geoserver.org/rest/granules" xmlns:ogc="http://www.opengis.net/ogc" xmlns:wfs="http://www.opengis.net/wfs" xmlns:gml="http://www.opengis.net/gml"> <gml:boundedBy> <gml:Box srsName="http://www.opengis.net/gml/srs/epsg.xml#4326"> <gml:coord> <gml:X>5.0</gml:X> <gml:Y>45.0</gml:Y> </gml:coord> <gml:coord> <gml:X>14.875</gml:X> <gml:Y>50.9375</gml:Y> </gml:coord> </gml:Box> </gml:boundedBy> <gml:featureMember> <gf:NO2 fid="NO2.1"> <gf:the_geom> <gml:Polygon> <gml:outerBoundaryIs> <gml:LinearRing> <gml:coordinates>5.0,45.0 5.0,50.9375 14.875,50.9375 14.875,45.0 5.0,45.0</gml:coordinates> </gml:LinearRing> </gml:outerBoundaryIs> </gml:Polygon> </gf:the_geom> <gf:location>polyphemus_20130301.nc</gf:location> <gf:imageindex>336</gf:imageindex> <gf:time>2013-03-01T00:00:00Z</gf:time> <gf:elevation>10.0</gf:elevation> <gf:fileDate>2013-03-01T00:00:00Z</gf:fileDate> <gf:updated>2013-04-11T10:54:31Z</gf:updated> </gf:NO2> </gml:featureMember> <gml:featureMember> <gf:NO2 fid="NO2.2"> <gf:the_geom> <gml:Polygon> <gml:outerBoundaryIs> <gml:LinearRing> <gml:coordinates>5.0,45.0 5.0,50.9375 14.875,50.9375 14.875,45.0 5.0,45.0</gml:coordinates> </gml:LinearRing> </gml:outerBoundaryIs> </gml:Polygon> </gf:the_geom> <gf:location>polyphemus_20130301.nc</gf:location> <gf:imageindex>337</gf:imageindex> <gf:time>2013-03-01T00:00:00Z</gf:time> <gf:elevation>35.0</gf:elevation> <gf:fileDate>2013-03-01T00:00:00Z</gf:fileDate> <gf:updated>2013-04-11T10:54:31Z</gf:updated> </gf:NO2> </gml:featureMember> </wfs:FeatureCollection>Removing all the granules originating from a particular file (a NetCDF file can contain many) can be done as follows:
curl -v -u admin:geoserver -XDELETE "http://localhost/geoserver/rest/workspaces/topp/coveragestores/polyphemus-v1/coverages/NO2/index/granules.xml?filter=location='polyphemus_20130301.nc'"
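The GML response above can be post-processed with any XML library. For example, counting the granules returned by a saved granules.xml with Python’s standard library (the sample document below is abbreviated from the response shown earlier):

```python
import xml.etree.ElementTree as ET

GML = "http://www.opengis.net/gml"

# Abbreviated granules response: two featureMember entries, as above.
sample = """<wfs:FeatureCollection
    xmlns:wfs="http://www.opengis.net/wfs"
    xmlns:gml="http://www.opengis.net/gml">
  <gml:featureMember/>
  <gml:featureMember/>
</wfs:FeatureCollection>"""

root = ET.fromstring(sample)
granules = root.findall("{%s}featureMember" % GML)
print(len(granules))  # 2
```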
Creating an empty mosaic and harvest granules¶ The next command uploads an
empty.zip
file. This archive contains the definition of an empty mosaic (no granules in this case) through the following files:
- datastore.properties (the PostGIS datastore connection params)
- indexer.xml (the mosaic indexer; note the CanBeEmpty=true parameter)
- polyphemus-test.xml (the auxiliary file used by the NetCDF reader to parse schemas and tables)
Note
Make sure to update the datastore.properties file with your connection params and refresh the zip when done, before uploading it.
Note
The code blocks below contain a single command that is extended over multiple lines.
Note
The configure=none parameter allows for future configuration after harvesting
curl -u admin:geoserver -XPUT -H "Content-type:application/zip" --data-binary @empty.zip http://localhost/geoserver/rest/workspaces/topp/coveragestores/empty/file.imagemosaic?configure=none
The following instead instructs the mosaic to harvest a single
polyphemus_20120401.nc
file into the mosaic, collecting its properties and updating the mosaic index:curl -v -u admin:geoserver -XPOST -H "Content-type: text/plain" -d "file:///path/to/the/file/polyphemus_20120401.nc" "http://localhost/geoserver/rest/workspaces/topp/coveragestores/empty/external.imagemosaic"
Once done, you can get the list of coverages/granules available on that store:
curl -v -u admin:geoserver -XGET "http://localhost/geoserver/rest/workspaces/topp/coveragestores/empty/coverages.xml?list=all"
which will result in the following:
<list> <coverageName>NO2</coverageName> <coverageName>O3</coverageName> </list>The next step is to configure one of the available coverages (for instance, NO2):
curl -v -u admin:geoserver -XPOST -H "Content-type: text/xml" -d @"/path/to/coverageconfig.xml" "http://localhost/geoserver/rest/workspaces/topp/coveragestores/empty/coverages"
Where coverageconfig.xml may look like this:
<coverage> <name>NO2</name> </coverage>Note
When specifying only the coverage name, the coverage will be automatically configured
Master Password Change¶ The master password can be fetched with a GET request.
curl -v -u admin:geoserver -XGET http://localhost/geoserver/rest/security/masterpw.xml
A generated master password may look like -"}3a^Kh. The next step is to create an XML file.
File changes.xml
<masterPassword> <oldMasterPassword>-"}3a^Kh</oldMasterPassword> <newMasterPassword>geoserver1</newMasterPassword> </masterPassword>Changing the master password using the file:
curl -v -u admin:geoserver -XPUT -H "Content-type: text/xml" -d @changes.xml http://localhost/geoserver/rest/security/masterpw.xml
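If the generated master password contains characters such as & or <, writing changes.xml by hand will produce invalid XML. A Python sketch that serializes the payload safely (the `master_password_xml` helper name is invented for illustration):

```python
import xml.etree.ElementTree as ET

def master_password_xml(old, new):
    """Build the <masterPassword> change payload, letting ElementTree
    escape any XML-special characters in the passwords."""
    el = ET.Element("masterPassword")
    ET.SubElement(el, "oldMasterPassword").text = old
    ET.SubElement(el, "newMasterPassword").text = new
    return ET.tostring(el, encoding="unicode")

# The sample generated password from the text above:
print(master_password_xml('-"}3a^Kh', "geoserver1"))
```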
PHP¶
The examples in this section use the server-side scripting language PHP, a popular language for dynamic webpages. PHP has cURL functions, as well as XML functions, making it a convenient method for performing batch processing through the GeoServer REST interface. The following scripts execute single requests, but can be easily modified with looping structures to perform batch processing.
Note
In order to execute the examples just copy the script content into a
test.php
file and execute the following command:$ php test.php
POST with PHP/cURL¶ The following script attempts to add a new workspace.
<?php // Open log file $logfh = fopen("GeoserverPHP.log", 'w') or die("can't open log file"); // Initiate cURL session $service = "http://localhost:8080/geoserver/"; // replace with your URL $request = "rest/workspaces"; // to add a new workspace $url = $service . $request; $ch = curl_init($url); // Optional settings for debugging curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); //option to return string curl_setopt($ch, CURLOPT_VERBOSE, true); curl_setopt($ch, CURLOPT_STDERR, $logfh); // logs curl messages //Required POST request settings curl_setopt($ch, CURLOPT_POST, True); $passwordStr = "admin:geoserver"; // replace with your username:password curl_setopt($ch, CURLOPT_USERPWD, $passwordStr); //POST data curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-type: application/xml")); $xmlStr = "<workspace><name>test_ws</name></workspace>"; curl_setopt($ch, CURLOPT_POSTFIELDS, $xmlStr); //POST return code $successCode = 201; $buffer = curl_exec($ch); // Execute the curl request // Check for errors and process results $info = curl_getinfo($ch); if ($info['http_code'] != $successCode) { $msgStr = "# Unsuccessful cURL request to "; $msgStr .= $url." [". $info['http_code']. "]\n"; fwrite($logfh, $msgStr); } else { $msgStr = "# Successful cURL request to ".$url."\n"; fwrite($logfh, $msgStr); } fwrite($logfh, $buffer."\n"); curl_close($ch); // free resources if curl handle will not be reused fclose($logfh); // close logfile ?>The logfile should look something like:
* About to connect() to www.example.com port 80 (#0) * Trying 123.456.78.90... * connected * Connected to www.example.com (123.456.78.90) port 80 (#0) * Server auth using Basic with user 'admin' > POST /geoserver/rest/workspaces HTTP/1.1 Authorization: Basic sDsdfjkLDFOIedlsdkfj Host: www.example.com Accept: */* Content-type: application/xml Content-Length: 43 < HTTP/1.1 201 Created < Date: Fri, 21 May 2010 15:44:47 GMT < Server: Apache-Coyote/1.1 < Location: http://www.example.com/geoserver/rest/workspaces/test_ws < Content-Length: 0 < Content-Type: text/plain < * Connection #0 to host www.example.com left intact # Successful cURL request to http://www.example.com/geoserver/rest/workspaces * Closing connection #0If the cURL request fails, a code other than 201 will be returned. Here are some possible values:
Code Meaning 0 Couldn’t resolve host; possibly a typo in host name 201 Successful POST 30x Redirect; possibly a typo in the URL 401 Invalid username or password 405 Method not Allowed: check request syntax 500 Geoserver is unable to process the request, e.g. the workspace already exists, the xml is malformed, … For other codes see cURL Error Codes and HTTP Codes.
GET with PHP/cURL¶ The script above can be modified to perform a GET request to obtain the names of all workspaces by replacing the code blocks for required settings, data and return code with the following:
<?php // Required GET request settings // curl_setopt($ch, CURLOPT_GET, True); // CURLOPT_GET is True by default //GET data curl_setopt($ch, CURLOPT_HTTPHEADER, array("Accept: application/xml")); //GET return code $successCode = 200; ?>The logfile should now include lines like:
> GET /geoserver/rest/workspaces HTTP/1.1 < HTTP/1.1 200 OKDELETE with PHP/cURL¶ To delete the (empty) workspace we just created, the script is modified as follows:
<?php $request = "rest/workspaces/test_ws"; // to delete this workspace ?><?php //Required DELETE request settings curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "DELETE"); $passwordStr = "admin:geoserver"; // replace with your username:password curl_setopt($ch, CURLOPT_USERPWD, $passwordStr); //DELETE data curl_setopt($ch, CURLOPT_HTTPHEADER, array("Content-type: application/atom+xml")); //DELETE return code $successCode = 200; ?>The log file will include lines like:
> DELETE /geoserver/rest/workspaces/test_ws HTTP/1.1 < HTTP/1.1 200 OKPython¶
We are looking for volunteers to flesh out this section with examples.
In the meantime, anyone looking to do python scripting of the GeoServer REST config API should use gsconfig.py. It is quite capable, and is used in production as part of GeoNode, so examples can be found in that codebase. Documentation and examples can be found at the section GeoNode’s Ad-Hoc API.
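Pending fuller examples, the request pattern is the same as in the cURL section. Below is a minimal standard-library sketch (the function names are invented for illustration; it assumes the workshop's default credentials and a GeoServer reachable at localhost):

```python
import base64
import json
import urllib.request

def basic_auth_header(user, password):
    """Build the value of an HTTP Basic Authorization header."""
    token = base64.b64encode("{0}:{1}".format(user, password).encode("utf-8"))
    return "Basic " + token.decode("ascii")

def list_workspaces(base="http://localhost/geoserver",
                    user="admin", password="geoserver"):
    """Return the workspace names published by the REST API.

    Requires a running GeoServer; the function is defined but not
    executed here, so the sketch can be read without a server.
    """
    req = urllib.request.Request(base.rstrip("/") + "/rest/workspaces.json")
    req.add_header("Authorization", basic_auth_header(user, password))
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    # An empty catalog serializes "workspaces" as an empty value.
    workspaces = data.get("workspaces") or {}
    return [w["name"] for w in workspaces.get("workspace", [])]
```

Calling `list_workspaces()` against the training VM would return the workspace names created in the cURL exercises.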
Java¶
We are looking for volunteers to flesh out this section with examples.
In the meantime, anyone looking to do Java scripting of the GeoServer REST API should use GeoServer Manager, a REST client library with minimal dependencies on external libraries.
Another option is gsrcj. This project is a GeoServer REST client written in Java with no extra dependencies on GeoServer/GeoTools, unlike the standard GeoServer REST module. The project has minimal documentation, but does include a Quickstart.
Ruby¶
The examples in this section use rest-client, a REST client for Ruby. There is also a project to create a GeoServer-specific REST client in Ruby: RGeoServer.
Once installed on a system, rest-client can be included in a Ruby script by adding require 'rest-client'.
GET and PUT Settings¶
Note
In order to execute the example, just copy the script content into a test.ruby file and execute the following command:
$ ruby test.ruby
This example shows how to read the settings using GET, make a change and then use PUT to write the change to the server.
require 'json'
require 'rest-client'

url = 'http://admin:geoserver@localhost:8080/geoserver/rest/'

# get the settings and parse the JSON into a Hash
json_text = RestClient.get(url + 'settings.json')
settings = JSON.parse(json_text)

# settings can be found with the appropriate keys
global_settings = settings["global"]
jai_settings = global_settings["jai"]

# change a value
jai_settings["allowInterpolation"] = true

# put changes back to the server
RestClient.put(url + 'settings', settings.to_json, :content_type => :json)
GeoServer Importer¶
The Importer extension gives a GeoServer administrator an alternate, more-streamlined method for uploading and configuring new layers.
There are two primary advantages to using the Importer over the standard GeoServer data-loading workflow:
- Supports batch operations (loading and publishing multiple spatial files or database tables in one operation)
- Creates unique styles for each layer, rather than linking to the same (existing) styles.
This section will discuss the Importer extension.
Installing the Importer extension¶
The Importer extension is an official extension, available on the GeoServer download page.
Download the extension for your version of GeoServer. (If you see an option, select Core.)
Warning
Make sure to match the version of the extension to the version of GeoServer.
Extract the archive and copy the contents into the GeoServer WEB-INF/lib directory.
Restart GeoServer.
To verify that the extension was installed successfully, open the web admin interface and look for an Import Data option in the Data section of the left-side menu.
Importer extension successfully installed.
For additional information please see the section on Using the Importer extension.
Using the Importer extension¶
Here are step-by-step instructions to import multiple shapefiles in one operation. For more details on the different types of operations, please see the Importer interface reference.
Find a directory of shapefiles and copy it into your data directory.
Note
You can always use the Natural Earth Quickstart data for this task.
Log in as an administrator and navigate to the Data –> Import Data page.
Select Spatial Files as the data source.
Data source
Click Browse to navigate to the directory of shapefiles to be imported.
The web-based file browser will show as options your home directory, data directory, and the root of your file system (or drive). In this case, select Data directory
Directory
Back on the main form, select Create new next to Workspace, and enter ne to denote the workspace.
Note
Make sure the Store field reads Create new as well.
Import target workspace
Click Next to start the import process.
On the next screen, any layers available for import will be shown.
Note
Non-spatial files will be ignored.
Import layer list
In most cases, all files will be ready for import, but if the spatial reference system (SRS) is not recognized, you will need to input it manually by clicking Advanced.
Note
You will need to manually input the SRS if you used the Natural Earth data above. For each layer click on Advanced and set reprojection to EPSG:4326.
Advanced import settings
Check the box next to each layer you wish to import.
Setting the layers to import
When ready, click Import.
Warning
Don’t click Done at this point, otherwise the import will be canceled.
The results of the import process will be shown next to each layer.
When finished, click Done.
Note
Recent import processes are listed at the bottom of the page. You may wish to visit these pages to check if any difficulties were encountered during the import process or import additional layers.
Recent imports
Importer interface reference¶
The Layer Importer user interface is a component of the GeoServer web interface. You can access it from the GeoServer web interface by clicking the Import Data link, found on the left side of the screen after logging in.
Data sources page¶
The front page of the Layer Importer is where the data source and format are set. The following options are displayed:
Select one of the following data sources to use for the import:
- Spatial Files (see Supported data formats for more details)
- PostGIS database
- Oracle database
- SQL Server database

Choose a data source
The contents of the next section are dependent on the data source chosen here.
There is a single box for selecting a file or directory. Click the Browse link to bring up a file chooser. To select a file, click on it. To select a directory, click on a directory name to open it and then click OK.

Spatial file data source

File chooser for selecting spatial files
Fill out fields for Connection type (Default or JNDI), Host, Port, Database name, Schema, Username, and Password.
There are also advanced connection options, which are common to the standard PostGIS store loading procedure. (See the PostGIS data store page in the GeoServer reference documentation.)

PostGIS data source connection
The parameter fields for the Oracle import are identical to those for PostGIS. The fields aren’t populated with default credentials, with the exception of the port, which is set to 1521 by default.
Note
This option is only enabled if the Oracle extension is installed.

Oracle data source connection
The parameter fields for the SQL Server import are identical to those for PostGIS. The fields aren’t populated with default credentials, with the exception of the port, which is set to 4866 by default.
Note
This option is only enabled if the SQL Server extension is installed.

SQL Server data source connection
This area specifies where in the GeoServer catalog the new data source will be stored. This does not affect file placement.
Select the name of an existing workspace and store.

Target workspace and store in GeoServer
Alternately, select Create New and type in names for a new workspace or store. These will be created during the import process.

Creating a new workspace and store
This section will list previous imports, and whether they were successful or not. Items can be removed from this list with the Remove button, but otherwise cannot be edited.

Recent imports
When ready to continue to the next page, click Next.
Layer listing page¶
On the next page will be a list of layers found by the Layer Importer. The layers will be named according to the source content’s name (file name or database table name). For each entry there will be a Status showing whether the source is ready to be imported.
All layers will be selected for import by default, but can be deselected here by unchecking the box next to each entry.

List of layers to be imported
A common issue during the import process is when a CRS cannot be determined for a given layer. In this case, a dialog box will be displayed where the CRS can be declared explicitly. Enter the CRS and click Apply.

Declaring a CRS
When ready to perform the import, click Import.
Each selected layer will be added to the GeoServer catalog inside a new or existing store, and published as a layer.
After the import is complete, the status area will refresh, showing whether the import was successful for each layer. If successful, a dialog box for previewing the layer will be displayed, with options for Layer Preview (OpenLayers), Google Earth, and GeoExplorer.

Layers successfully imported
Advanced import settings page¶
The Advanced link next to each layer will lead to the Advanced import settings page.
On this page, data can be set to be reprojected from one CRS to another during the import process. To enable reprojection, select the Reprojection box, and enter the source and target CRS.
In addition, on this page attributes can be renamed and their type changed. Click on the Add link under Attribute Remapping to select the attribute to alter, its type, and its new name. Click Apply when done.
Click Save when finished.

Advanced layer list page
Supported data formats¶
The importer supports any format that GeoServer can use as a data store or coverage store. These include the most commonly used formats:
- Shapefile
- GeoTIFF
And a few additional formats:
- CSV
- KML
The following databases are supported:
- PostGIS
- Oracle
- Microsoft SQL Server
Note
Oracle and SQL Server require extra drivers to be installed.
- Install instructions for Oracle
- Install instructions for SQL Server
REST API¶
Importer concepts¶
The importer REST api is built around a tree of objects representing a single import, structured as follows:
- import
- target workspace
- data
- task (one or more)
- data
- layer
- transformation (one or more)
An import refers to the top-level object and is a “session”-like entity that holds the state of the entire import. It maintains information relevant to the import as a whole, such as user information and timestamps, along with optional information that is uniform across all tasks, such as a target workspace or the shared input data (e.g., a directory or a database). An import is made of any number of task objects.
A data is the description of the source data of an import (overall) or of a task. If the import has a global data definition, this normally refers to an aggregate store such as a directory or a database, and the data associated with each task refers to a single element inside that aggregation, such as a single file or table.
A task represents a unit of work needed by the importer to register one new layer, or alter an existing one, and contains the following information:
- The data being imported
- The target store that is the destination of the import
- The target layer
- The transformations that need to be applied to the data before it is imported
The data of a task, referred to as its source, is the data to be processed as part of the task. This data comes in a variety of forms, including:
- A spatial file (Shapefile, GeoTiff, KML, etc…)
- A directory of spatial files
- A table in a spatial database
- A remote location that the server will download data from
A task is classified as either “direct” or “indirect”. A direct task is one in which the data being imported requires no transformation; it is imported as is. An example of such a task is one that simply imports an existing Shapefile. An indirect task is one that does require a transformation of the original import data. An example of an indirect task is one that imports a Shapefile into an existing PostGIS database. Another indirect task might take a CSV file as input, turn the x and y columns into a Point, remap a string column into a timestamp, and finally import the result into a PostGIS database.
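As a sketch of the CSV example above, the transform chain of such an indirect task might look like the following. The field names are illustrative, and the AttributesToPointGeometryTransform type name is an assumption to be checked against your GeoServer version; the transformation types themselves are covered in the REST API reference below.

```json
"transformChain": {
  "type": "vector",
  "transforms": [
    {
      "type": "AttributesToPointGeometryTransform",
      "latField": "lat",
      "lngField": "lng"
    },
    {
      "type": "DateFormatTransform",
      "field": "timestamp",
      "format": "yyyyMMdd"
    }
  ]
}
```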
REST API Reference¶
Method | Action | Status Code/Headers | Input | Output | Parameters |
---|---|---|---|---|---|
GET | Retrieve all imports | 200 | n/a | Import Collection | n/a |
POST | Create a new import | 201 with Location header | n/a | Imports | async=false/true |
GET /imports
results in:
Status: 200 OK
Content-Type: application/json
{
"imports": [{
"id": 0,
"state": "COMPLETE",
"href": "http://localhost:8080/geoserver/rest/imports/0"
}, {
"id": 1,
"state": "PENDING",
"href": "http://localhost:8080/geoserver/rest/imports/1"
}]
}
Posting an import JSON object to the /imports path creates a new import session:
Content-Type: application/json
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "scratch"
}
},
"targetStore": {
"dataStore": {
"name": "shapes"
}
},
"data": {
"type": "file",
"file": "/data/spearfish/archsites.shp"
}
}
}
The parameters are:
Name | Optional | Description |
---|---|---|
targetWorkspace | Y | The target workspace to import to |
targetStore | Y | The target store to import to |
data | Y | The data to be imported |
Creation alone does not start the import, but it may automatically populate its tasks depending on the target. For example, by referring to a directory of shapefiles to be imported, the creation will automatically fill in a task to import each of the shapefiles as a new layer.
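A creation body like the one above can also be assembled from a script. The following is a minimal Python sketch (the helper name is illustrative); it only builds the JSON payload, which would then be POSTed to /geoserver/rest/imports:

```python
import json


def build_import_payload(workspace=None, store=None, data=None):
    """Assemble the JSON body for POST /geoserver/rest/imports.

    All three parameters are optional, mirroring the table above."""
    imp = {}
    if workspace:
        imp["targetWorkspace"] = {"workspace": {"name": workspace}}
    if store:
        imp["targetStore"] = {"dataStore": {"name": store}}
    if data:
        imp["data"] = data
    return json.dumps({"import": imp})


body = build_import_payload(
    workspace="scratch",
    store="shapes",
    data={"type": "file", "file": "/data/spearfish/archsites.shp"},
)
```

The resulting string matches the request body shown above and can be sent with any HTTP client.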
The response to the above POST request will be:
Status: 201 Created
Location: http://localhost:8080/geoserver/rest/imports/2
Content-Type: application/json
{
"import": {
"id": 2,
"href": "http://localhost:8080/geoserver/rest/imports/2",
"state": "READY",
"targetWorkspace": {
"workspace": {
"name": "scratch"
}
},
"targetStore": {
"dataStore": {
"name": "shapes",
"type": "PostGIS"
}
},
"data": {
"type": "file",
"format": "Shapefile",
"href": "http://localhost:8080/geoserver/rest/imports/2/data",
"file": "archsites.shp"
},
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/2/tasks/0",
"state": "READY"
}
]
}
}
The operation of populating the tasks can take time, especially if run against a large set of files or against “remote” data (more on this later). In this case the POST request can include ?async=true at the end of the URL to make the importer run it asynchronously. The import will then be created in the INIT state and will remain in that state until all the data transfer and task creation operations are completed. In case of failure to fetch data, the import will stop immediately, the state will switch to INIT_ERROR, and an error message will appear in the import context “message” field.
Method | Action | Status Code/Headers | Input | Output | Parameters |
---|---|---|---|---|---|
GET | Retrieve import with id <importId> | 200 | n/a | Imports | n/a |
POST | Execute import with id <importId> | 204 | n/a | n/a | async=true/false |
PUT | Create import with proposed id <importId>. If the proposed id is ahead of the current (next) id, the current id will be advanced. If the proposed id is less than or equal to the current id, the current will be used. This allows an external system to dictate the id management. | 201 with Location header | n/a | Imports | n/a |
DELETE | Remove import with id <importId> | 200 | n/a | n/a | n/a |
The representation of an import is the same as the one contained in the import creation response. The execution of an import can be a long task; as such, it’s possible to add async=true to the request to make it run asynchronously. The client will then have to poll the import representation and check when it reaches the “COMPLETE” state.
An import can have a “data” element representing the source of the data to be imported. The data can be of different types: “file”, “directory”, “mosaic”, “database” and “remote”. During import initialization the importer will scan the contents of said resource and generate import tasks for each data item found in it.
Most data types are discussed in the task section; the only type specific to the whole import context is “remote”, which is used to ask the importer to fetch the data from a remote location autonomously, without requiring the client to perform an upload.
The representation of a remote resource looks as follows:
"data": {
"type": "remote",
"location": "ftp://fthost/path/to/importFile.zip",
"username": "user",
"password": "secret",
"domain" : "mydomain"
}
The location can be any URI supported by Commons VFS, including HTTP and FTP servers. The username, password and domain elements are all optional, and are required only if the remote server demands authentication. In case the referred file is compressed, it will be unpacked once the download completes, and the tasks will be created over the result of unpacking.
Tasks¶
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve all tasks for import with id <importId> | 200 | n/a | Task Collection |
POST | Create a new task | 201 with Location header | Multipart form data | Tasks |
GET /imports/0/tasks
Results in:
Status: 200 OK
Content-Type: application/json
{
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/2/tasks/0",
"state": "READY"
}
]
}
A new task can be created by issuing a POST to imports/<importId>/tasks as “Content-Type: multipart/form-data” multipart-encoded data, as defined by RFC 2388. One or more files can be uploaded this way, and a task will be created for importing them. If the file being uploaded is a zip file, it will be unzipped on the server side and treated as a directory of files.
The response to the upload will be the creation of a new task, for example:
Status: 201 Created
Location: http://localhost:8080/geoserver/rest/imports/1/tasks/1
Content-type: application/json
{
"task": {
"id": 1,
"href": "http://localhost:8080/geoserver/rest/imports/2/tasks/1",
"state": "READY",
"updateMode": "CREATE",
"data": {
"type": "file",
"format": "Shapefile",
"href": "http://localhost:8080/geoserver/rest/imports/2/tasks/1/data",
"file": "bugsites.shp"
},
"target": {
"href": "http://localhost:8080/geoserver/rest/imports/2/tasks/1/target",
"dataStore": {
"name": "shapes",
"type": "PostGIS"
}
},
"progress": "http://localhost:8080/geoserver/rest/imports/2/tasks/1/progress",
"layer": {
"name": "bugsites",
"href": "http://localhost:8080/geoserver/rest/imports/2/tasks/1/layer"
},
"transformChain": {
"type": "vector",
"transforms": []
}
}
}
This creation mode assumes a POST to imports/<importId>/tasks of form-url-encoded data containing a url parameter:
Content-type: application/x-www-form-urlencoded
url=file:///data/spearfish/
The creation response will be the same as the multipart upload.
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve task with id <taskId> within import with id <importId> | 200 | n/a | Task |
PUT | Modify task with id <taskId> within import with id <importId> | 200 | Task | Task |
DELETE | Remove task with id <taskId> within import with id <importId> | 200 | n/a | n/a |
The representation of a task resource is the same one reported in the task creation response.
A PUT request over an existing task can be used to update its representation. The representation can be partial, containing just the elements that need to be updated.
The updateMode of a task normally starts as “CREATE”, that is, create the target resource if missing. Other possible values are “REPLACE”, which deletes the existing features in the target layer and replaces them with those from the task source, and “APPEND”, which just adds the features from the task source to an existing layer.
The following PUT request updates a task from “CREATE” to “APPEND” mode:
Content-Type: application/json
{
"task": {
"updateMode": "APPEND"
}
}
The following operations are specific to data objects of type directory.
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve the list of files for a task with id <taskId> within import with id <importId> | 200 | n/a | Task |
The response to a GET request will be:
Status: 200 OK
Content-Type: application/json
{
files: [
{
file: "tasmania_cities.shp",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data/files/tasmania_cities.shp"
},
{
file: "tasmania_roads.shp",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data/files/tasmania_roads.shp"
},
{
file: "tasmania_state_boundaries.shp",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data/files/tasmania_state_boundaries.shp"
},
{
file: "tasmania_water_bodies.shp",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data/files/tasmania_water_bodies.shp"
}
]
}
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve the file with id <fileId> from the data of a task with id <taskId> within import with id <importId> | 200 | n/a | Task |
DELETE | Remove a specific file from the task with id <taskId> within import with id <importId> | 200 | n/a | n/a |
Following the links, we get to the representation of a single file; notice how in this case a main file can be associated with sidecar files:
Status: 200 OK
Content-Type: application/json
{
type: "file",
format: "Shapefile",
location: "C:\devel\gs_data\release\data\taz_shapes",
file: "tasmania_cities.shp",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data/files/tasmania_cities.shp",
prj: "tasmania_cities.prj",
other: [
"tasmania_cities.dbf",
"tasmania_cities.shx"
]
}
In case the input data is of mosaic type, we have all the attributes typical of a directory, plus support for directly specifying the timestamp of a particular granule.
In order to specify the timestamp a PUT request can be issued against the granule:
Content-Type: application/json
{
"timestamp": "2004-01-01T00:00:00.000+0000"
}
and the response will be:
Status: 200 OK
Content-Type: application/json
{
"type": "file",
"format": "GeoTIFF",
"href": "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data/files/bm_200401.tif",
"location": "/data/bluemarble/mosaic",
"file": "bm_200401.tiff",
"prj": null,
"other": [],
"timestamp": "2004-01-01T00:00:00.000+0000"
}
The following operations are specific to data objects of type database. At the time of writing, the REST API does not allow the creation of a database data source, but it can provide a read-only description of one that has been created using the GUI.
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve the database connection parameters for a task with id <taskId> within import with id <importId> | 200 | n/a | List of database connection parameters and available tables |
Performing a GET on a database type data will result in the following response:
{
type: "database",
format: "PostGIS",
href: "http://localhost:8080/geoserver/rest/imports/0/data",
parameters: {
schema: "public",
fetch size: 1000,
validate connections: true,
Connection timeout: 20,
Primary key metadata table: null,
preparedStatements: true,
database: "gttest",
port: 5432,
passwd: "cite",
min connections: 1,
dbtype: "postgis",
host: "localhost",
Loose bbox: true,
max connections: 10,
user: "cite"
},
tables: [
"geoline",
"geopoint",
"lakes",
"line3d",
]
}
The following operations are specific to data objects of type table. At the time of writing, the REST API does not allow the creation of a database data source, but it can provide a read-only description of one that has been created using the GUI. A table description is normally linked to a task, and refers to a database data object linked to the overall import.
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve the table description for a task with id <taskId> within import with id <importId> | 200 | n/a | A table representation |
Performing a GET on a database type data will result in the following response:
{
type: "table",
name: "abc",
format: "PostGIS",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/data"
}
The layer defines how the target layer will be created.
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve the layer of a task with id <taskId> within import with id <importId> | 200 | n/a | A layer JSON representation |
PUT | Modify the target layer for a task with id <taskId> within import with id <importId> | 200 | Task | Task |
Requesting the task layer will result in the following:
Status: 200 OK
Content-Type: application/json
{
layer: {
name: "tasmania_cities",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/layer",
title: "tasmania_cities",
originalName: "tasmania_cities",
nativeName: "tasmania_cities",
srs: "EPSG:4326",
bbox: {
minx: 147.2909004483,
miny: -42.85110181689001,
maxx: 147.2911004483,
maxy: -42.85090181689,
crs: "GEOGCS["WGS 84", DATUM["World Geodetic System 1984", SPHEROID["WGS 84", 6378137.0, 298.257223563, AUTHORITY["EPSG","7030"]], AUTHORITY["EPSG","6326"]], PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]], UNIT["degree", 0.017453292519943295], AXIS["Geodetic longitude", EAST], AXIS["Geodetic latitude", NORTH], AUTHORITY["EPSG","4326"]]"
},
attributes: [
{
name: "the_geom",
binding: "com.vividsolutions.jts.geom.MultiPoint"
},
{
name: "CITY_NAME",
binding: "java.lang.String"
},
{
name: "ADMIN_NAME",
binding: "java.lang.String"
},
{
name: "CNTRY_NAME",
binding: "java.lang.String"
},
{
name: "STATUS",
binding: "java.lang.String"
},
{
name: "POP_CLASS",
binding: "java.lang.String"
}
],
style: {
name: "cite_tasmania_cities",
href: "http://localhost:8080/geoserver/rest/imports/0/tasks/0/layer/style"
}
}
}
All the above attributes can be updated using a PUT request. Although this representation is similar to the one used by the REST config API, it should not be confused with it: it does not support the same set of properties; the supported properties are exactly the ones listed above.
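For instance, a partial PUT body changing just the title and SRS of the target layer might look like the following (a sketch assuming the partial-representation behaviour described for tasks also applies here; the values are illustrative):

```json
{
  "layer": {
    "title": "Tasmanian cities",
    "srs": "EPSG:4326"
  }
}
```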
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve the list of transformations of a task with id <taskId> within import with id <importId> | 200 | n/a | A list of transformations in JSON format |
POST | Create a new transformation and append it to a task with id <taskId> within import with id <importId> | 201 | A JSON transformation representation | The transform location |
A GET request for the list of transformations will result in the following response:
Status: 200 OK
Content-Type: application/json
{
"transforms": [
{
"type": "ReprojectTransform",
"href": "http://localhost:8080/geoserver/rest/imports/0/tasks/1/transforms/0",
"source": null,
"target": "EPSG:4326"
},
{
"type": "DateFormatTransform",
"href": "http://localhost:8080/geoserver/rest/imports/0/tasks/1/transforms/1",
"field": "date",
"format": "yyyyMMdd"
}
]
}
Creating a new transformation requires posting a JSON document with a type property identifying the class of the transformation, plus any extra attributes required by the transformation itself (this is transformation-specific; each one uses a different set of attributes).
The following POST request creates an attribute type remapping:
Content-Type: application/json
{
"type": "AttributeRemapTransform",
"field": "cat",
"target": "java.lang.Integer"
}
The response will be:
Status: 201 Created
Location: http://localhost:8080/geoserver/rest/imports/0/tasks/1/transform/2
Method | Action | Status Code/Headers | Input | Output |
---|---|---|---|---|
GET | Retrieve a transformation identified by <transformId> inside a task with id <taskId> within import with id <importId> | 200 | n/a | A single transformation in JSON format |
PUT | Modifies the definition of a transformation identified by <transformId> inside a task with id <taskId> within import with id <importId> | 200 | A JSON transformation representation (eventually just the portion of it that needs to be modified) | The full transformation representation |
DELETE | Removes the transformation identified by <transformId> inside a task with id <taskId> within import with id <importId> | 200 | A JSON transformation representation (eventually just the portion of it that needs to be modified) | The full transformation representation |
Requesting a single transformation by identifier will result in the following response:
Status: 200 OK
Content-Type: application/json
{
"type": "ReprojectTransform",
"href": "http://localhost:8080/geoserver/rest/imports/0/tasks/1/transforms/0",
"source": null,
"target": "EPSG:4326"
}
Assuming we have a reprojection transformation, and that we need to change the target SRS type, the following PUT request will do the job:
Content-Type: application/json
{
"type": "ReprojectTransform",
"target": "EPSG:3005"
}
The response will be:
Status: 200 OK
Content-Type: application/json
{
"type": "ReprojectTransform",
"href": "http://localhost:8080/geoserver/rest/imports/0/tasks/1/transform/0",
"source": null,
"target": "EPSG:3005"
}
Remaps a certain field to a given target data type
Parameter | Optional | Description |
---|---|---|
field | N | The name of the field to be remapped |
target | N | The “target” field type, as a fully qualified Java class name |
Transforms two numeric fields, latField and lngField, into a point geometry representation, POINT(lngField, latField); the source fields will be removed.
Parameter | Optional | Description |
---|---|---|
latField | N | The “latitude” field |
lngField | N | The “longitude” field |
For database targets only, creates an index on a given column after importing the data into the database.
Parameter | Optional | Description |
---|---|---|
field | N | The field to be indexed |
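A transformation body for this index creation might look like the following (the CreateIndexTransform type name is an assumption to be verified against your GeoServer version; the field name is illustrative):

```json
{
  "type": "CreateIndexTransform",
  "field": "cat"
}
```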
Parses a string representation of a date into a Date/Timestamp object
Parameter | Optional | Description |
---|---|---|
field | N | The field to be parsed |
format | Y | A date parsing pattern, set up using the Java SimpleDateFormat syntax. If missing, a number of built-in formats will be tried instead (short and full ISO date formats, dates without any separators). |
Takes an integer field and transforms it into a date, interpreting the integer field as a date.
Parameter | Optional | Description |
---|---|---|
field | N | The field containing the year information |
Reprojects a vector layer from a source CRS to a target CRS
Parameter | Optional | Description |
---|---|---|
source | Y | Identifier of the source coordinate reference system (the native one will be used if missing) |
target | N | Identifier of the target coordinate reference system |
Applies gdal_translate to a single-file raster input. Requires gdal_translate to be available on the PATH used by the web container running GeoServer.
Parameter | Optional | Description |
---|---|---|
options | N | Array of options that will be passed to gdal_translate (beside the input and output names, which are internally managed) |
Applies gdalwarp to a single-file raster input. Requires gdalwarp to be available on the PATH used by the web container running GeoServer.
Parameter | Optional | Description |
---|---|---|
options | N | Array of options that will be passed to gdalwarp (beside the input and output names, which are internally managed) |
Applies gdaladdo to a single-file raster input. Requires gdaladdo to be available on the PATH used by the web container running GeoServer.
Parameter | Optional | Description |
---|---|---|
options | N | Array of options that will be passed to gdaladdo (beside the input file name, which is internally managed) |
levels | N | Array of integers with the overview levels that will be passed to gdaladdo |
Importer REST API examples¶
Mass configuring a directory of shapefiles¶
In order to initiate an import of the c:\data\tasmania directory into the existing tasmania workspace, the following JSON will be POSTed to GeoServer:
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "directory",
"location": "C:/data/tasmania"
}
}
}
This curl command can be used for the purpose:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @import.json "http://localhost:8080/geoserver/rest/imports"
The importer will locate the files to be imported, and automatically prepare the tasks, returning the following response:
{
"import": {
"id": 9,
"href": "http://localhost:8080/geoserver/rest/imports/9",
"state": "PENDING",
"archive": false,
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "directory",
"format": "Shapefile",
"location": "C:\\data\\tasmania",
"href": "http://localhost:8080/geoserver/rest/imports/9/data"
},
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/0",
"state": "READY"
},
{
"id": 1,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/1",
"state": "READY"
},
{
"id": 2,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/2",
"state": "READY"
},
{
"id": 3,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/3",
"state": "READY"
}
]
}
}
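Before triggering the run, a client can verify that every task in a response like the one above has reached the READY state; a small self-contained sketch (the response dict is abbreviated for brevity):

```python
import json

# Abbreviated version of the importer response shown above.
response = json.loads("""
{"import": {"id": 9, "state": "PENDING",
  "tasks": [{"id": 0, "state": "READY"},
            {"id": 1, "state": "READY"}]}}
""")

def all_tasks_ready(ctx):
    """True when every task of the import context is in state READY."""
    return all(t["state"] == "READY" for t in ctx["import"]["tasks"])

print(all_tasks_ready(response))  # True for the sample response
```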
After checking that every task is ready, the import can be initiated by executing a POST on the import resource:
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/imports/9"
The resource can then be monitored for progress, and eventually final results:
curl -u admin:geoserver -XGET "http://localhost:8080/geoserver/rest/imports/9"
which, in case of a successful import, will look like:
{
"import": {
"id": 9,
"href": "http://localhost:8080/geoserver/rest/imports/9",
"state": "COMPLETE",
"archive": false,
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "directory",
"format": "Shapefile",
"location": "C:\\data\\tasmania",
"href": "http://localhost:8080/geoserver/rest/imports/9/data"
},
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/0",
"state": "COMPLETE"
},
{
"id": 1,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/1",
"state": "COMPLETE"
},
{
"id": 2,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/2",
"state": "COMPLETE"
},
{
"id": 3,
"href": "http://localhost:8080/geoserver/rest/imports/9/tasks/3",
"state": "COMPLETE"
}
]
}
}
Configuring a shapefile with no projection information¶
In this case, let’s assume we have a single shapefile, tasmania_cities.shp, that does not have the .prj ancillary file (the example is equally valid for any case where the .prj file contents cannot be matched to an official EPSG code).
We are going to post the following import definition:
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "file",
"file": "C:/data/tasmania/tasmania_cities.shp"
}
}
}
With the usual curl command:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @import.json "http://localhost:8080/geoserver/rest/imports"
The response in case the CRS is missing will be:
{
"import": {
"id": 13,
"href": "http://localhost:8080/geoserver/rest/imports/13",
"state": "PENDING",
"archive": false,
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "file",
"format": "Shapefile",
"file": "tasmania_cities.shp"
},
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/13/tasks/0",
"state": "NO_CRS"
}
]
}
}
Drilling into the task’s layer, we can see the SRS information is missing:
{
"layer": {
"name": "tasmania_cities",
"href": "http://localhost:8080/geoserver/rest/imports/13/tasks/0/layer",
"title": "tasmania_cities",
"originalName": "tasmania_cities",
"nativeName": "tasmania_cities",
"bbox": {
"minx": 146.2910004483,
"miny": -43.85100181689,
"maxx": 148.2910004483,
"maxy": -41.85100181689
},
"attributes": [
{
"name": "the_geom",
"binding": "com.vividsolutions.jts.geom.MultiPoint"
},
{
"name": "CITY_NAME",
"binding": "java.lang.String"
},
{
"name": "ADMIN_NAME",
"binding": "java.lang.String"
},
{
"name": "CNTRY_NAME",
"binding": "java.lang.String"
},
{
"name": "STATUS",
"binding": "java.lang.String"
},
{
"name": "POP_CLASS",
"binding": "java.lang.String"
}
],
"style": {
"name": "tasmania_tasmania_cities2",
"href": "http://localhost:8080/geoserver/rest/imports/13/tasks/0/layer/style"
}
}
}
The following PUT request will update the SRS:
curl -u admin:geoserver -XPUT -H "Content-type: application/json" -d @layerUpdate.json "http://localhost:8080/geoserver/rest/imports/13/tasks/0/layer/"
Where layerUpdate.json is:
{
"layer": {
"srs": "EPSG:4326"
}
}
Getting the import definition again, we’ll find it ready to execute:
{
"import": {
"id": 13,
"href": "http://localhost:8080/geoserver/rest/imports/13",
"state": "PENDING",
"archive": false,
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "file",
"format": "Shapefile",
"file": "tasmania_cities.shp"
},
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/13/tasks/0",
"state": "READY"
}
]
}
}
A POST request will make it execute:
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/imports/13"
And eventually succeed:
{
"import": {
"id": 13,
"href": "http://localhost:8080/geoserver/rest/imports/13",
"state": "COMPLETE",
"archive": false,
"targetWorkspace": {
"workspace": {
"name": "tasmania"
}
},
"data": {
"type": "file",
"format": "Shapefile",
"file": "tasmania_cities.shp"
},
"tasks": [
{
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/13/tasks/0",
"state": "COMPLETE"
}
]
}
}
Uploading a CSV file to PostGIS while transforming it¶
A remote sensing tool is generating CSV files with some locations and measurements that we want to upload into PostGIS as a new spatial table. The CSV file looks as follows:
AssetID, SampleTime, Lat, Lon, Value
1, 2015-01-01T10:00:00, 10.00, 62.00, 15.2
1, 2015-01-01T11:00:00, 10.10, 62.11, 30.25
1, 2015-01-01T12:00:00, 10.20, 62.22, 41.2
1, 2015-01-01T13:00:00, 10.31, 62.33, 27.6
1, 2015-01-01T14:00:00, 10.41, 62.45, 12
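Since a malformed row would make the whole import fail, it can be worth validating the CSV locally before posting it. A quick sketch using Python’s csv module, with the column names of the sample above:

```python
import csv
import io

# A couple of rows from the sample file above, inlined for the example.
sample = """AssetID, SampleTime, Lat, Lon, Value
1, 2015-01-01T10:00:00, 10.00, 62.00, 15.2
1, 2015-01-01T11:00:00, 10.10, 62.11, 30.25
"""

# skipinitialspace handles the blank after each comma, in the header too.
reader = csv.DictReader(io.StringIO(sample), skipinitialspace=True)
rows = list(reader)

# Every row must carry numeric Lat/Lon for the point transform to work.
for row in rows:
    float(row["Lat"])
    float(row["Lon"])
print(len(rows))
```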
First, we are going to create an empty import with an existing PostGIS store as the target:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @import.json "http://localhost:8080/geoserver/rest/imports"
Where import.json is:
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "topp"
}
},
"targetStore": {
"dataStore": {
"name": "gttest"
}
}
}
}
Then, we are going to POST the CSV file to the tasks list, in order to create an import task for it:
curl -u admin:geoserver -F name=test -F filedata=@values.csv "http://localhost:8080/geoserver/rest/imports/0/tasks"
And we are going to get back a new task definition, with a notification that the CRS is missing:
{
"task": {
"id": 0,
"href": "http://localhost:8080/geoserver/rest/imports/16/tasks/0",
"state": "NO_CRS",
"updateMode": "CREATE",
"data": {
"type": "file",
"format": "CSV",
"file": "values.csv"
},
"target": {
"href": "http://localhost:8080/geoserver/rest/imports/16/tasks/0/target",
"dataStore": {
"name": "values",
"type": "CSV"
}
},
"progress": "http://localhost:8080/geoserver/rest/imports/16/tasks/0/progress",
"layer": {
"name": "values",
"href": "http://localhost:8080/geoserver/rest/imports/16/tasks/0/layer"
},
"transformChain": {
"type": "vector",
"transforms": [
]
}
}
}
As before, we are going to force the CRS by updating the layer:
curl -u admin:geoserver -XPUT -H "Content-type: application/json" -d @layerUpdate.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/layer/"
Where layerUpdate.json is:
{
"layer": {
"srs": "EPSG:4326"
}
}
Then, we are going to create a transformation mapping the Lat/Lon columns to a point:
{
"type": "AttributesToPointGeometryTransform",
"latField": "Lat",
"lngField": "Lon"
}
The above will be uploaded to GeoServer as follows:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @toPoint.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/transforms"
Now the import is ready to run, and we’ll execute it using:
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/imports/0"
If all goes well the new layer is created in PostGIS and registered in GeoServer as a new layer.
In case the features in the CSV need to be appended to an existing layer, a PUT request against the task can be performed, changing its updateMode from “CREATE” to “APPEND”. Changing it to “REPLACE” instead will preserve the layer, but remove the old contents and replace them with the newly uploaded ones.
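The body of such a PUT request is small; a sketch generating it (this assumes the task representation mirrors the one returned by GET on the task resource):

```python
import json

# Switch the task from creating a new layer to appending to the existing one.
body = json.dumps({"task": {"updateMode": "APPEND"}})
print(body)
```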
Uploading and optimizing a GeoTiff with ground control points¶
A data supplier is periodically providing GeoTiffs that we need to configure in GeoServer. The GeoTIFF is referenced via Ground Control Points, is organized in stripes, and has no overviews. The objective is to rectify, optimize and publish it via the importer.
First, we are going to create an empty import with no store as the target:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @import.json "http://localhost:8080/geoserver/rest/imports"
Where import.json is:
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "sf"
}
}
}
}
Then, we are going to POST the GeoTiff file to the tasks list, in order to create an import task for it:
curl -u admin:geoserver -F name=test -F filedata=@box_gcp_fixed.tif "http://localhost:8080/geoserver/rest/imports/0/tasks"
We are then going to append the transformations to rectify (gdalwarp), retile (gdal_translate) and add overviews (gdaladdo) to it:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @warp.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/transforms"
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @gtx.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/transforms"
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @gad.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/transforms"
warp.json is:
{
"type": "GdalWarpTransform",
"options": [ "-t_srs", "EPSG:4326"]
}
gtx.json is:
{
"type": "GdalTranslateTransform",
"options": [ "-co", "TILED=YES", "-co", "BLOCKXSIZE=512", "-co", "BLOCKYSIZE=512"]
}
gad.json is:
{
"type": "GdalAddoTransform",
"options": [ "-r", "average"],
"levels" : [2, 4, 8, 16]
}
Now the import is ready to run, and we’ll execute it using:
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/imports/0"
A new box_gcp_fixed layer will appear in GeoServer, with an underlying GeoTiff file ready for web serving.
Adding a new granule into an existing mosaic¶
A data supplier is periodically providing new time based imagery that we need to add into an existing mosaic in GeoServer. The imagery is in GeoTiff format, and lacks a good internal structure, which needs to be aligned with that of the other images in the mosaic.
First, we are going to create an import with an indication of where the granule is located, and the target store:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @import.json "http://localhost:8080/geoserver/rest/imports"
Where import.json is:
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "topp"
}
},
"data": {
"type": "file",
"file": "/home/aaime/devel/gisData/ndimensional/data/world/world.200407.3x5400x2700.tiff"
},
"targetStore": {
"dataStore": {
"name": "bluemarble"
}
}
}
}
We are then going to append the transformations to harmonize the file with the rest of the mosaic:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @gtx.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/transforms"
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @gad.json "http://localhost:8080/geoserver/rest/imports/0/tasks/0/transforms"
gtx.json is:
{
"type": "GdalTranslateTransform",
"options": [ "-co", "TILED=YES"]
}
gad.json is:
{
"type": "GdalAddoTransform",
"options": [ "-r", "average"],
"levels" : [2, 4, 8, 16, 32, 64, 128]
}
Now the import is ready to run, and we’ll execute it using:
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/imports/0"
The new granule will be ingested into the mosaic, and will thus be available for time based requests.
Asynchronously fetching and importing data from a remote server¶
We assume a remote FTP server contains multiple shapefiles that we need to import into GeoServer as new layers. The files are large, and the server has much better bandwidth than the client, so it’s best if GeoServer performs the data fetching on its own.
In this case an asynchronous request using remote data will be the best fit:
curl -u admin:geoserver -XPOST -H "Content-type: application/json" -d @import.json "http://localhost:8080/geoserver/rest/imports?async=true"
Where import.json is:
{
"import": {
"targetWorkspace": {
"workspace": {
"name": "topp"
}
},
"data": {
"type": "remote",
"location": "ftp://myserver/data/bc_shapefiles",
"username": "dan",
"password": "secret"
}
}
}
The request will return immediately with an import context in “INIT” state, and it will remain in that state until the data is fetched and the tasks created. Once the state switches to “PENDING” the import will be ready for execution. Since there are a lot of shapefiles to process, the import run will also be executed in asynchronous mode:
curl -u admin:geoserver -XPOST "http://localhost:8080/geoserver/rest/imports/0?async=true"
The response will return immediately in this case as well, and the progress can be followed as the tasks in the import switch state.
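Following the progress boils down to a polling loop that GETs the import resource until a terminal state is reached. A minimal sketch of the decision logic (the state names are the ones appearing in the responses above; treat the exact set as an assumption and extend it for your GeoServer version):

```python
# States observed in the importer responses in this section; an assumption,
# not an exhaustive list for every GeoServer version.
WAITING_STATES = {"INIT", "PENDING", "READY", "RUNNING"}

def should_keep_polling(state):
    """Decide whether another GET on the import resource is needed."""
    return state in WAITING_STATES

# A real client would read the state from
# GET http://localhost:8080/geoserver/rest/imports/<id>
# and sleep between requests; here we just exercise the predicate.
for state in ["INIT", "PENDING", "COMPLETE"]:
    if not should_keep_polling(state):
        break
```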
GeoNode’s Ad-Hoc API¶
gsconfig¶
gsconfig is a Python library for manipulating a GeoServer instance via the GeoServer RESTConfig API.
The project is distributed under the MIT License.
Installing¶
pip install gsconfig
For developers:
git clone git@github.com:boundlessgeo/gsconfig.git
cd gsconfig
python setup.py develop
Getting Help¶
There is a brief manual at http://boundlessgeo.github.io/gsconfig/. If you have questions, please ask them on the GeoServer Users mailing list: http://geoserver.org/comm/.
Please use the GitHub project at http://github.com/boundlessgeo/gsconfig for any bug reports (pull requests are welcome too, but please include tests where possible).
Sample Layer Creation Code¶
from geoserver.catalog import Catalog
from geoserver.util import shapefile_and_friends
cat = Catalog("http://localhost:8080/geoserver/")
topp = cat.get_workspace("topp")
shapefile_plus_sidecars = shapefile_and_friends("states")
# shapefile_and_friends should look on the filesystem to find a shapefile
# and related files based on the base path passed in
#
# shapefile_plus_sidecars == {
# 'shp': 'states.shp',
# 'shx': 'states.shx',
# 'prj': 'states.prj',
# 'dbf': 'states.dbf'
# }
# 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes)
# 'workspace' is optional (GeoServer's default workspace is used by... default)
# 'name' is required
ft = cat.create_featurestore("states", workspace=topp, data=shapefile_plus_sidecars)
Running Tests¶
Since the entire purpose of this module is to interact with GeoServer, the test suite is mostly composed of integration tests.
These tests necessarily rely on a running copy of GeoServer, and expect that this GeoServer instance will be using the default data directory that is included with GeoServer.
This data is also included in the GeoServer source repository as /data/release/.
In addition, it is expected that there will be a postgres database available at postgres:password@localhost:5432/db.
You can test connecting to this database with the psql command line client by running $ psql -d db -Upostgres -h localhost -p 5432 (you will be prompted interactively for the password).
To override the assumed database connection parameters, the following environment variables are supported:
- DATABASE
- DBUSER
- DBPASS
If present, psycopg will be used to verify the database connection prior to running the tests.
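For example, the overrides above can be exported in the shell before launching the test suite (the values are illustrative; point them at your own test database):

```shell
# Illustrative connection overrides for the gsconfig test suite.
export DATABASE=db
export DBUSER=postgres
export DBPASS=postgres
```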
If provided, the following environment variables will be used to reset the data directory:
- GEOSERVER_HOME
- Location of git repository to read the clean data from. If only this option is provided git clean will be used to reset the data.
- GEOSERVER_DATA_DIR
- Optional location of the data dir geoserver will be running with. If provided, rsync will be used to reset the data.
- GS_VERSION
- Optional environment variable allowing the catalog test cases to automatically download and start a vanilla GeoServer WAR from the web. Be sure that there are no running services on HTTP port 8080.
Here are the commands that I use to reset before running the gsconfig tests:
$ cd ~/geoserver/src/web/app/
$ PGUSER=postgres dropdb db
$ PGUSER=postgres createdb db -T template_postgis
$ git clean -dxff -- ../../../data/release/
$ git checkout -f
$ MAVEN_OPTS="-XX:PermSize=128M -Xmx1024M" \
GEOSERVER_DATA_DIR=../../../data/release \
mvn jetty:run
At this point, GeoServer will be running in the foreground, but it will take a few seconds to actually begin listening for HTTP requests.
You can stop it with CTRL-C
(but don’t do that until you’ve run the tests!)
You can run the gsconfig tests with the following command:
$ python setup.py test
Instead of restarting GeoServer after each run to reset the data, the following should allow re-running the tests:
$ git clean -dxff -- ../../../data/release/
$ curl -XPOST --user admin:geoserver http://localhost:8080/geoserver/rest/reload
More Examples - Updated for GeoServer 2.4+¶
Loading the GeoServer catalog using gsconfig is quite easy. The example below allows you to connect to GeoServer by specifying custom credentials.
from geoserver.catalog import Catalog
cat = Catalog("http://localhost:8080/geoserver/rest/", "admin", "geoserver")
The code below allows you to create a FeatureType from a Shapefile:
geosolutions = cat.get_workspace("geosolutions")
import geoserver.util
shapefile_plus_sidecars = geoserver.util.shapefile_and_friends("C:/work/gsconfig/test/data/states")
# shapefile_and_friends should look on the filesystem to find a shapefile
# and related files based on the base path passed in
#
# shapefile_plus_sidecars == {
# 'shp': 'states.shp',
# 'shx': 'states.shx',
# 'prj': 'states.prj',
# 'dbf': 'states.dbf'
# }
# 'data' is required (there may be a 'schema' alternative later, for creating empty featuretypes)
# 'workspace' is optional (GeoServer's default workspace is used by... default)
# 'name' is required
ft = cat.create_featurestore("test", shapefile_plus_sidecars, geosolutions)
It is possible to create JDBC Virtual Layers too. The code below allows you to create a new SQL View called my_jdbc_vt_test, defined by a custom SQL statement:
from geoserver.catalog import Catalog
from geoserver.support import JDBCVirtualTable, JDBCVirtualTableGeometry, JDBCVirtualTableParam
cat = Catalog('http://localhost:8080/geoserver/rest/', 'admin', '****')
store = cat.get_store('postgis-geoserver')
geom = JDBCVirtualTableGeometry('newgeom','LineString','4326')
ft_name = 'my_jdbc_vt_test'
epsg_code = 'EPSG:4326'
sql = 'select ST_MakeLine(wkb_geometry ORDER BY waypoint) As newgeom, assetid, runtime from waypoints group by assetid,runtime'
keyColumn = None
parameters = None
jdbc_vt = JDBCVirtualTable(ft_name, sql, 'false', geom, keyColumn, parameters)
ft = cat.publish_featuretype(ft_name, store, epsg_code, jdbc_virtual_table=jdbc_vt)
This example shows how to easily update a layer property. The same approach may be used with every catalog resource:
ne_shaded = cat.get_layer("ne_shaded")
ne_shaded.enabled=True
cat.save(ne_shaded)
cat.reload()
Deleting a store from the catalog requires purging all the associated layers first. This can be done like this:
ne_shaded = cat.get_layer("ne_shaded")
st = cat.get_store("ne_shaded")
cat.delete(ne_shaded)
cat.reload()
cat.delete(st)
cat.reload()
There are also some functions allowing you to manage ImageMosaic coverages. It is possible to create new ImageMosaics, add granules to them, read the coverage metadata, modify the mosaic Dimensions and finally query the mosaic granules and list their properties. The gsconfig methods map the REST APIs for ImageMosaic.
In order to create a new ImageMosaic layer, you can prepare a zip file containing the properties files for the mosaic configuration. Refer to the GeoTools ImageMosaic Plugin guide to get details on the mosaic configuration. The package contains an already configured zip file with two granules.
You need to update or remove the datastore.properties file before creating the mosaic, otherwise you will get an exception.
from geoserver.catalog import Catalog
cat = Catalog("http://localhost:8180/geoserver/rest")
cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip")
By default cat.create_imagemosaic tries to configure the layer too. If you want to create the store only, you can specify the following parameter:
cat.create_imagemosaic("NOAAWW3_NCOMultiGrid_WIND_test", "NOAAWW3_NCOMultiGrid_WIND_test.zip", "none")
In order to retrieve the ImageMosaic coverage store from the catalog you can do this:
store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test")
It is possible to add more granules to the mosaic at runtime. With the following method you can add granules already present on the machine's local path:
cat.harvest_externalgranule("file://D:/Work/apache-tomcat-6.0.16/instances/data/data/MetOc/NOAAWW3/20131001/WIND/NOAAWW3_NCOMultiGrid__WIND_000_20131001T000000.tif", store)
The method below allows you to send granules to the ImageMosaic remotely via POST. The granules will be uploaded and stored in the ImageMosaic index folder:
cat.harvest_uploadgranule("NOAAWW3_NCOMultiGrid__WIND_000_20131002T000000.zip", store)
To delete an ImageMosaic store, you can follow the standard approach of deleting the layers first. ATTENTION: at this time you need to manually clean up the mosaic granules from the data dir and, in case you used a DB datastore, you must also drop the mosaic tables.
layer = cat.get_layer("NOAAWW3_NCOMultiGrid_WIND_test")
cat.delete(layer)
cat.reload()
cat.delete(store)
cat.reload()
The method below allows you to load and update the coverage metadata of the ImageMosaic. You need to do this for every coverage of the ImageMosaic, of course:
coverage = cat.get_resource_by_url("http://localhost:8180/geoserver/rest/workspaces/natocmre/coveragestores/NOAAWW3_NCOMultiGrid_WIND_test/coverages/NOAAWW3_NCOMultiGrid_WIND_test.xml")
coverage.supported_formats = ['GEOTIFF']
cat.save(coverage)
By default the ImageMosaic layer does not have the coverage dimensions configured. It is possible to update and manage the coverage dimensions using the coverage metadata.
ATTENTION: the presentation parameter accepts only one of the following values: {‘LIST’, ‘DISCRETE_INTERVAL’, ‘CONTINUOUS_INTERVAL’}.
from geoserver.support import DimensionInfo
timeInfo = DimensionInfo("time", "true", "LIST", None, "ISO8601", None)
coverage.metadata = ({'dirName':'NOAAWW3_NCOMultiGrid_WIND_test_NOAAWW3_NCOMultiGrid_WIND_test', 'time': timeInfo})
cat.save(coverage)
Once the ImageMosaic has been configured, it is possible to read the coverages along with their granule schema and granule info:
from geoserver.catalog import Catalog
cat = Catalog("http://localhost:8180/geoserver/rest")
store = cat.get_store("NOAAWW3_NCOMultiGrid_WIND_test")
coverages = cat.mosaic_coverages(store)
schema = cat.mosaic_coverage_schema(coverages['coverages']['coverage'][0]['name'], store)
granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store)
The granules details can be easily read by doing something like this:
granules['crs']['properties']['name']
granules['features']
granules['features'][0]['properties']['time']
granules['features'][0]['properties']['location']
granules['features'][0]['properties']['run']
When the mosaic grows and starts accumulating a huge set of granules, you may need to filter the granules query with a CQL filter on the coverage schema attributes:
granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z'")
granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "time >= '2013-10-01T03:00:00.000Z' AND run = 0")
granules = cat.mosaic_granules(coverages['coverages']['coverage'][0]['name'], store, "location LIKE '%20131002T000000.tif'")
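The same filters can be prototyped client side against the GeoJSON-like structure returned by mosaic_granules. A sketch with a hand-made stand-in dict (property names follow the examples above; the file names are hypothetical):

```python
# Hand-made stand-in for the dict returned by cat.mosaic_granules().
granules = {"features": [
    {"properties": {"time": "2013-10-01T00:00:00.000Z", "run": 0,
                    "location": "wind_20131001T000000.tif"}},
    {"properties": {"time": "2013-10-02T00:00:00.000Z", "run": 0,
                    "location": "wind_20131002T000000.tif"}},
]}

# Client-side equivalent of the "time >= ..." CQL filter; ISO 8601 strings
# in the same format compare correctly as plain strings.
recent = [f for f in granules["features"]
          if f["properties"]["time"] >= "2013-10-01T03:00:00.000Z"]
print(len(recent))
```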
gsimporter¶
gsimporter is a Python library for using GeoServer’s importer API.
Installing¶
pip install gsimporter
or
git clone https://github.com/boundlessgeo/gsimporter.git
cd gsimporter
pip install .
Getting Help¶
Please use the GitHub project at http://github.com/boundlessgeo/gsimporter for any bug reports (pull requests are welcome too, but please include tests where possible).
Running Tests¶
The tests are integration tests. These require a running GeoServer instance with the community/importer modules installed. Because some of the tests use a PostgreSQL database, a database is also required. It is strongly advised to run with a data directory you don’t care about.
The test suite will first attempt to verify a connection to GeoServer and a connection to the database. If the default values are not appropriate, provide them via environment variables on the command line or via export. For example:
GEOSERVER_BASE_URL=http://localhost:8080 python setup.py test
A convenient way to deal with connection or other settings (besides setting things up to use the defaults) is to put them all in a bash (or other shell) script.
The tests are designed to create workspaces named importer and importer2 for use in testing. importer will be set as the default workspace. As much as possible, things are cleaned up after test execution.
To run all of the tests, one way is via setup.py. python setup.py test should do the trick.
If developing and finer grained control is desired, specific tests and other flags can be provided using python test/uploadtests.py. Supported arguments are:
- –clean delete layers and stores in the test workspaces. useful for cleanup.
- –skip-teardown don’t delete things after running. may cause errors but useful for a single test.
To run a single case (or drop the method name to run the whole class):
python test/uploadtests.py ErrorTests.test_invalid_file
Testing in GeoNode¶
The community encourages Test Driven Development (TDD) and contributions of new tests to extend the coverage. Ideally every model, view, and utility should be covered by tests.
GeoNode has Unit, Integration and Javascript tests. The Unit tests are located in the tests file of every django app (Maps, Layers, Documents, Catalogue, Search, Security etc).
The Integration, CSW and smoke tests are located under the tests folder.
Warning
The tests are meant to be run using the SQLite database; some of them may fail when using PostgreSQL or other databases. Therefore remove or rename your local_settings.py file before running the tests.
If running them in development mode, make sure the Jetty server is shut down, otherwise the tests could get stuck. To make sure it is, run:
$ paver stop
Unit Tests¶
To run the unit tests make sure you have the virtualenv active (if running GeoNode under virtualenv) then run:
$ paver test # or python setup.py test when testing development versions
This will produce a detailed test report.
It’s possible to run just specific apps tests by using the django command:
$ python manage.py test app/tests.py
For example:
$ python manage.py test geonode.maps.tests
To run a single testcase or method (omit the method name to run the whole class), for example:
$ python manage.py test geonode.maps.tests:MapsTest.test_maps_search
These tests are based on the Python/django unit test suite.
Integration Tests¶
To run the integration tests make sure you have the virtualenv active (if running GeoNode under virtualenv) then run:
$ paver test_integration # or python setup.py test_integration when testing development versions
To run the csw integration test run:
$ paver test_integration -n geonode.tests.csw
Like the unit tests above, it is also possible to test specific modules, for example:
$ paver test_integration -n geonode.tests.integration:GeoNodeMapTest.test_search_result_detail
To test with coverage:
$ python manage.py test geonode.maps.tests -- --with-coverage --cover-package=geonode.maps
These tests are based on the Python/django unit test suite.
Javascript Tests¶
Note
Javascript tests are currently disabled in GeoNode. There is a plan to improve and re-enable them in the future.
Contributing to GeoNode¶
Warning
This section is freely adapted from the official GitHub guides.
If you are interested in helping us make GeoNode better, there are many ways to do so.
Participate in the Discussion¶
GeoNode has a mailing list (https://groups.google.com/d/forum/geonode-users) where users can ask and answer questions about the software. There are also IRC chats on Gitter where users (https://gitter.im/GeoNode/general) and developers (https://gitter.im/GeoNode) can discuss GeoNode in real time. Sometimes users also post interesting tips for managing sites running GeoNode. If you want to help out with GeoNode, one easy way is to sign up for the mailing list and help answer questions.
Report Problems on the Issue Tracking System¶
Informative bug reports are a key part of the bug fixing process, so if you do run into a problem with GeoNode, please don’t hesitate to report it on our bug tracker, available online at http://github.com/GeoNode/geonode/issues. Useful information for bug reports includes:
- What were you doing when the bug occurred? Does the problem occur every time you do that, or only occasionally?
- What were you expecting to happen? What happened instead?
- What version of the software are you using? Please also note any changes from the default configuration.
- If there is a data file involved in the bug (such as a Shapefile that doesn’t render properly), please consider including it in the bug report. Be aware that not all data files are freely distributable.
To help GeoNode better address the issue, you can tag the ticket with one or more labels that you can find in the side column.
Write Documentation¶
GeoNode’s documentation can always use improvement - there are always more questions to be answered. For managing contributions to the manual, GeoNode uses a process similar to that used for managing the code itself. The documentation is generated from source files in the docs/ directory within the GeoNode source repository. See http://sphinx.pocoo.org/ for more information on the documentation system GeoNode uses.
If you want to learn more about contributing to the documentation, please go ahead to “How to contribute to GeoNode’s Documentation”. GeoNode also has some guidelines to help with writing once you are set up: “How to write Documentation”.
Provide Translations¶
If GeoNode doesn’t provide a user interface in your native language, consider contributing a new translation. To get started here are the instructions “How to contribute to GeoNode’s Translation”.
Write Code¶
Of course, GeoNode is an open source project and encourages contributions of source code as well. If you are interested in making small changes, you can find an open ticket on http://github.com/GeoNode/geonode/issues, hack away, and get started with the “Patch Review Process”.
Further Reading¶
Contributing to Open Source on GitHub
Work With GitHub Issues and Pull Requests
Contributing to Open Source on GitHub¶
Warning
This section is freely adapted from the official GitHub guides.
A great way to get involved in open source is to contribute to the existing projects you’re using.
A Typical GitHub Project Structure¶
Projects often have a community around them, made up of other users in different (formal or informal) roles:
- Owner is the user or organization that created the project and has the project on their account.
- Maintainers and Collaborators are the users primarily doing the work on a project and driving the direction. Oftentimes the owner and the maintainer are the same. They have write access to the repository.
- Contributors are everyone who has had a pull request merged into a project.
- Community Members are the users who often use and care deeply about the project and are active in discussions for features and pull requests.
Nearly all GitHub projects include a README.md file. The readme provides a lay of the land for a project with details on how to use, build and sometimes contribute to a project.
A LICENSE file, well, is the license for the project. An open source project’s license informs users what they can and can’t do (e.g., use, modify, redistribute), and contributors, what they are allowing others to do.
Many larger projects go beyond a readme to give instructions for how people can use their project. In such cases you’ll often find a link to another file or a folder named docs in the repository.
Alternatively, the repository may instead use the GitHub wiki to break down documentation.

Issues are a great way to keep track of tasks, enhancements, and bugs for your projects. They’re kind of like email—except they can be shared and discussed with the rest of your team. Most software projects have a bug tracker of some kind. GitHub’s tracker is called Issues, and has its own section in every repository.

For more information on how Issues work, see the section “Work With GitHub Issues and Pull Requests”
If you’re able to patch the bug or add the feature yourself, make a pull request with the code. Be sure you’ve read any documents on contributing, understand the license and have signed a CLA if required.
Once you’ve submitted a pull request the maintainer(s) can compare your branch to the existing one and decide whether or not to incorporate (pull in) your changes.
For more information on how Pull Requests work, see the section “Work With GitHub Issues and Pull Requests”
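The contribution flow described above can be condensed into a few commands. This is only a sketch: the fork URL placeholder, the branch name and the issue number below are hypothetical examples, not part of the official GeoNode workflow documentation.

```shell
# Clone your fork (replace <your-username>), work on a topic branch,
# commit and push; the Pull Request itself is then opened on GitHub.
git clone https://github.com/<your-username>/geonode.git
cd geonode
git checkout -b fix_issue_1234      # topic branch for your patch
# ... edit files ...
git add .
git commit -m "Fix #1234: short description of the change"
git push origin fix_issue_1234
```

Using a topic branch keeps your fix isolated from master, so the maintainers can review and merge it independently.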
Work With GitHub Issues and Pull Requests¶
Warning
This section is freely adapted from the official GitHub guides.
Issues¶
An Issue is a note on a repository about something that needs attention. It could be a bug, a feature request, a question or lots of other things. On GitHub you can label, search and assign Issues, making managing an active project easier.
For example, let’s take a look at Bootstrap’s Issues section:

GitHub’s issue tracking is special because of its focus on collaboration, references, and excellent text formatting. A typical issue on GitHub looks a bit like this:

- A title and description describe what the issue is all about.
- Color-coded labels help you categorize and filter your issues (just like labels in email).
- A milestone acts like a container for issues. This is useful for associating issues with specific features or project phases (e.g. Weekly Sprint 9/5-9/16 or Shipping 1.0).
- One assignee is responsible for working on the issue at any given time.
- Comments allow anyone with access to the repository to provide feedback.
Click the Issues tab from the sidebar.
Click New Issue.
Give your Issue a title and description: Add a new Logo to GeoNode custom.
Click Submit new Issue when you’re done. Now this issue has a permanent home (URL) that you can reference even after it is closed.
- Check existing issues for your issue. Duplicating an issue is slower for both parties so search through open and closed issues to see if what you’re running into has been addressed already.
- Be clear about what your problem is: what was the expected outcome, what happened instead? Detail how someone else can recreate the problem.
- Link to demos recreating the problem on things like JSFiddle or CodePen.
- Include system details like the browser, library or operating system you’re using, and its version.
- Paste error output or logs in your issue or in a Gist. If pasting them in the issue, wrap them in three backticks (```) so that they render nicely.
Branching¶
Branching is the way to work on different parts of a repository at one time.
When you create a repository, by default it has one branch named master. You could keep working on this branch and have only one, that’s fine. But if you have another feature or idea you want to work on, you can create another branch, starting from master, so that you can leave master in its working state.
When you create a branch, you’re making a copy of the original branch as it was at that point in time (like a photo snapshot). If the original branch changes while you’re working on your new branch, no worries, you can always pull in those updates.

GeoNode developers use branches to keep bug fixes and feature work separate from the master (production) branch. When a feature or fix is ready, the branch is merged into master through a Pull Request.
Go to the project folder and create a new branch:

$ cd /home/geonode/geonode_custom/
$ sudo git branch add_logo
$ sudo git checkout add_logo

Check that you are working on the correct branch, add_logo:

$ cd /home/geonode/geonode_custom/
$ git branch

Push the new branch to GitHub:

$ cd /home/geonode/geonode_custom/
$ sudo git push origin add_logo
On GitHub, saved changes are called commits.
Each commit has an associated commit message, which is a description explaining why a particular change was made. Thanks to these messages, you and others can read through commits and understand what you’ve done and why.
Add a new logo to your custom GeoNode as described in the section Theming your GeoNode project
Stage the new files into the working project using git add:

$ cd /home/geonode/geonode_custom/
$ sudo git add geonode_custom/static
$ git status

Commit the changes providing a commit message, and push them into your branch add_logo:

$ cd /home/geonode/geonode_custom/
$ sudo git commit -m "Adding a new logo to the custom GeoNode"
$ sudo git push origin add_logo
Pull Requests¶
Pull Requests are the heart of collaboration on GitHub. When you make a pull request, you’re proposing your changes and requesting that someone pull in your contribution - aka merge them into their branch. GitHub’s Pull Request feature allows you to compare the content on two branches. The changes, additions and subtractions, are shown in green and red and called diffs (differences).
As soon as you make a change, you can open a Pull Request. People use Pull Requests to start a discussion about commits (code review) even before the code is finished. This way you can get feedback as you go or help when you’re stuck.
By using GitHub’s @mention system in your Pull Request message, you can ask for feedback from specific people or teams.
Click the Pull Request icon on the sidebar, then from the Pull Request page, click the green New pull request button.
Select the branch you made, add_logo, to compare with master (the original).
Look over your changes in the diffs on the Compare page, make sure they’re what you want to submit.
When you’re satisfied that these are the changes you want to submit, click the big green Create Pull Request button.
Give your pull request a title and, since it relates directly to an open issue, include “fixes #” and the issue number in the title. Write a brief description of your changes.
When you’re done with your message, click Create pull request!
It’s time to bring your changes together – merge your add_logo branch into the master (the original) branch.
Click the green button to merge the changes into master. Click Confirm merge. Go ahead and delete the branch, since its changes have been incorporated, with the Delete branch button in the purple box.
If you revisit the issue you opened, it’s now closed! Because you included “fixes #1” in your Pull Request title, GitHub took care of closing that issue when the Pull Request was merged!
- Introduction to GeoNode development
- This module will introduce you to the components that GeoNode is built with, the standards that it supports, the services it provides based on those standards, and an overview of its architecture.
- Django Overview
This section introduces some basic concepts of Django, the Python-based web framework on top of which GeoNode has been developed.
Django’s primary goal is to ease the creation of complex, database-driven websites. Django emphasizes reusability and “pluggability” of components, rapid development, and the principle of don’t repeat yourself. Python is used throughout, even for settings, files, and data models.
Django also provides an optional administrative create, read, update and delete interface that is generated dynamically through introspection and configured via admin models.
- Development Prerequisites and Core Modules
- This module will introduce you to the basic tools and skills required to start actively developing GeoNode.
- Install GeoNode for Development
This module shows a step-by-step guide for the setup of a GeoNode Development Environment on an Ubuntu system.
For other Linux distributions the commands are similar; the differences are mainly in the package names.
Note
For Windows: (Install GeoNode for Development (Windows))
- GeoNode debugging techniques
- GeoNode can be difficult to debug as there are several different components involved. This module shows some techniques to debug the different parts of GeoNode.
- GeoNode APIs
- This module provides an overview of the core modules and libraries used by GeoNode and teaches the user how to use them through some guided examples.
- Testing in GeoNode
- This section explains how to run the tests on GeoNode.
- Contributing to GeoNode
- Basic concepts about GitHub open source projects and best practices.
Advanced Workshop¶
Welcome to the GeoNode Training Advanced Workshop documentation vlatest.
This module introduces advanced techniques and methodologies for the management of geospatial data and the maintenance and tuning of servers in production environments.
The last sections of the module will also teach you how to add brand new classes and functionalities to your GeoNode installation.
- Prerequisites
- You should be familiar with GeoNode, GeoServer, the Python framework and development concepts, as well as with system administration and caching concepts and techniques.
Advanced Data Management and Processing¶
Adding Data to GeoServer¶
Managing GeoServer Data Directory¶
This section explains how to manage the GeoServer Data Directory.
What you will learn¶
In this section you will:
Generally, if GeoServer is running in Web Archive mode inside a servlet container, like in this Workshop, the data directory is located at <web application root>/data (the data directory contains the GeoServer configuration data).
The first thing to do is to correctly configure the GEOSERVER_DATA_DIR. To increase the portability of the data and to facilitate GeoServer updates, in the default Workshop configuration the GEOSERVER_DATA_DIR is configured under the directory:
${TRAINING_ROOT}/geoserver_data or %TRAINING_ROOT%\geoserver_data on Windows
Generally this is not an issue, but if you run the system from the LiveDVD this folder resides in memory. The first thing to do is to move this folder into a local persistent storage.
Move the GEOSERVER_DATA_DIR somewhere in the persistent storage using the command:

sudo mv -f ${TRAINING_ROOT}/geoserver_data <TARGET_DIR>

Make a symbolic link to the GEOSERVER_DATA_DIR by issuing the command:

ln -s <TARGET_DIR> ${TRAINING_ROOT}/geoserver_data
Warning
Check that the user geosolutions has permissions to read/write all the files/folders inside the GEOSERVER_DATA_DIR.
Note
Instead of creating a symbolic link you can configure GeoServer to point to the new GEOSERVER_DATA_DIR. To do that edit the file /opt/tomcat-geoserver/webapps/geoserver/WEB-INF/web.xml and modify the context param GEOSERVER_DATA_DIR.
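Put together, the move-and-link steps above look like the following sketch. The TARGET_DIR value (/data/geoserver_data) is only an example, and the chown line assumes the geosolutions user mentioned in the warning; adapt both to your environment.

```shell
# Relocate the data directory to persistent storage and link it back.
TARGET_DIR=/data/geoserver_data
sudo mv -f ${TRAINING_ROOT}/geoserver_data "$TARGET_DIR"
ln -s "$TARGET_DIR" ${TRAINING_ROOT}/geoserver_data
# make sure the geosolutions user can read/write everything inside it
sudo chown -R geosolutions "$TARGET_DIR"
```

After this, GeoServer keeps reading its configuration from the same path, while the actual files live on persistent storage.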
The following is the GEOSERVER_DATA_DIR structure:
data_directory/
coverages/
data/
demo/
gwc/
gwc-layers/
layergroups/
logs/
palettes/
security/
styles/
temp/
user_projections/
validation/
workspaces/
www/
global.xml
gwc-gs.xml
logging.xml
wcs.xml
wfs.xml
wms.xml
wps.xml
File | Description
coverages | Contains some demo raster layers for this training
data | Not to be confused with the GeoServer data directory itself, the data directory is a location where actual data can be stored. This directory is commonly used to store shapefiles and raster files but can be used for any data that is file based. The main benefit of storing data files inside of the data directory is portability.
demo | The demo directory contains files which define the sample requests available in the Sample Request Tool.
gwc | This directory holds the cache created by the embedded GeoWebCache service.
gwc-layers | This directory holds the configuration files created by the embedded GeoWebCache service for each layer.
layergroups | Contains information on the layer groups configurations.
logs | This directory contains the GeoServer logging files (log file and logging properties files).
palettes | The palettes directory is used to store pre-computed Image Palettes. Image palettes are used by the GeoServer WMS as a way to reduce the size of produced images while maintaining image quality.
security | The security directory contains all the files used to configure the GeoServer security subsystem. This includes a set of property files which define access roles, along with the services and data each role is authorized to access.
styles | The styles directory contains a number of Styled Layer Descriptor (SLD) files which contain styling information used by the GeoServer WMS.
temp | Temporary directory, used by the WPS service.
user_projections | The user_projections directory contains extra spatial reference system definitions. The epsg.properties file can be used to add new spatial reference systems, whilst the epsg_override.properties file can be used to override an official definition with a custom one.
validation | This directory contains the validation rules.
workspaces | The various workspaces directories contain metadata about stores and layers which are published by GeoServer. Each layer will have a layer.xml file associated with it, as well as either a coverage.xml or a featuretype.xml file depending on whether it’s a raster or vector.
www | The www directory is used to allow GeoServer to act like a regular web server and serve regular files. While not a replacement for a full blown web server, the www directory can be useful to easily serve OpenLayers map applications (this avoids the need to set up a proxy in order to respect the same origin policy).
global.xml | Contains settings that go across services, including contact information, JAI settings, character sets and verbosity.
gwc-gs.xml | Contains various settings for the GeoWebCache service.
logging.xml | Specifies the logging level, location, and whether it should log to std out.
wcs.xml | Contains the service metadata and various settings for the WCS service.
wfs.xml | Contains the service metadata and various settings for the WFS service.
wms.xml | Contains the service metadata and various settings for the WMS service.
Adding base types¶
This section explains how to add some of the base data types into GeoServer. As an example we will learn how to load a Shapefile and a GeoTIFF into GeoServer, as well as how to import a Shapefile into PostGIS and then publish it from there.
What you will learn¶
In this section you will:
The task of adding a Shapefile is one that is core to any GIS tool. This section covers the task of adding and publishing a Shapefile with GeoServer.
Navigate to the workshop directory $TRAINING_ROOT/data/user_data/ (on Windows %TRAINING_ROOT%\data\user_data) and find the following shapefiles:

Mainrd.shp
Mainrd.shx
Mainrd.dbf
Mainrd.prj
Copy the files to the following directory:
$GEOSERVER_DATA_DIR/data/boulder
for Windows:
%GEOSERVER_DATA_DIR%\data\boulder
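On Linux the copy can be done in a single command using brace expansion. This is a sketch based on the paths given above; adjust TRAINING_ROOT and GEOSERVER_DATA_DIR if your training environment differs.

```shell
# Copy all four parts of the shapefile into the GeoServer data directory.
mkdir -p "$GEOSERVER_DATA_DIR/data/boulder"
cp "$TRAINING_ROOT"/data/user_data/Mainrd.{shp,shx,dbf,prj} \
   "$GEOSERVER_DATA_DIR/data/boulder/"
```

Copying all four extensions at once avoids the common mistake of shipping a .shp file without its companion index, attribute and projection files.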
Note
Ensure that all four parts of the shapefile are copied. This includes the shp, shx, dbf, and prj extensions.
Navigate to the GeoServer Welcome Page.
On the Welcome page locate the Login form located at the top right corner, and enter the username “admin” and password “Geos”.
Click the Add stores link.
Select the Shapefile link and click it.
Note
The new data source menu contains a list of all the spatial formats supported by GeoServer. When creating a new data store one of these formats must be chosen. Formats like Shapefile and PostGIS are supported by default, and many other formats are available as extensions.
On the Edit Vector Data Source page enter “Mainrd” in the Data Source Name and Description fields. Finally click on the browse link in order to set the Shapefile location in the URL field and click Save.
Note
The Mainrd.shp file has just been copied into the data directory, inside the “data/boulder” folder, and the file browser opens right in the data directory, so just click on “data” and then “boulder” and you’ll find it.
After saving, you will be taken to a page that lists all the layers in the shapefile and gives you the option to publish any of them. Click Publish.
The Coordinate Reference Systems have to be manually populated. The Name and Title fields should be automatically populated.
Scroll down the page and generate the bounds for the layer by clicking the Compute from data button in the Bounding Boxes section.
Scroll to the bottom of the page, notice the read only Feature Type Detail table and then click Save.
If all went well, you should see something like this:
At this point a shapefile has been added and is ready to be served by GeoServer.
Choose the preview link in the main menu and filter the layer list with mainrd:
Click on the OpenLayers link to preview the layer in an interactive viewer:
The mainrd shapefile preview
In the next section we will see how to load a Shapefile into PostGIS.
This task shows how to load a Shapefile into a PostGIS database:
- Open the terminal window, enter the following command and press Enter to create a new database named ‘shape’:
Linux:
createdb -U postgres -T postgis20 shape
Windows:
setenv.bat
createdb -U postgres -T postgis20 shape
- Enter the following command and press Enter to load the Shapefile into the ‘shape’ database:
Linux:
shp2pgsql -I ${TRAINING_ROOT}/data/user_data/Mainrd.shp public.main_roads | psql -d shape
Windows:
shp2pgsql -I "%TRAINING_ROOT%\data\user_data\Mainrd.shp" public.main_roads | psql -U postgres -d shape
The Shapefile will be loaded into the ‘main_roads’ table of the ‘shape’ database. The following screenshot shows some of the table contents in pgAdmin III:
A PostGIS table by Shapefile
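As an alternative to pgAdmin, the import can be checked from the command line. The query below is a sketch: it assumes the ‘shape’ database created in the previous step, and the psql line should only be run where that PostgreSQL instance is available.

```shell
# Count the features imported into the main_roads table; the psql line is
# commented out so the sketch can be read without a running database.
Q="SELECT count(*) FROM public.main_roads;"
echo "$Q"
# psql -U postgres -d shape -c "$Q"
```

A non-zero count confirms that shp2pgsql and psql completed the load successfully.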
In the next section we will see how to add a PostGIS layer into GeoServer.
This task shows how to add a PostGIS layer into GeoServer:
Navigate to the GeoServer Welcome Page.
If you are not already logged in, on the Welcome page locate the Login form located at the top right corner, and enter the username “admin” and password “Geos”.
Click the Add stores link.
Add stores link
Select the PostGIS link and click it.
Add new PostGIS Store
On the New Vector Data Source page fill in the following parameters:
- Data source name, ‘shape’
- port, ‘5434’
- database, ‘shape’ the name of the database created in previous workshop step.
- user, ‘geosolutions’ the name of the user database owner.
- password, ‘Geos’ the user password.
and click Save.
Setting database connection parameters
After saving, you will be taken to a page that lists all the layers in the PostGIS database and gives you the option to publish any of them. Click Publish.
Publishing a layer from the PostGIS table
The Name and Title fields should be automatically populated. Fill the Declared SRS field to set the Coordinate Reference Systems and generate the bounds for the layer by clicking the Compute from data and Compute from native bounds buttons in the Bounding Boxes section.
Populating fields and generating the layer bounding box
Scroll to the bottom of the page, notice the read only Feature Type Detail table and then click Save.
Submitting the layer configuration
If all went well, you should see something like this:
After a successful save
At this point the PostGIS layer has been added and is ready to be served by GeoServer. Use the layer preview to view its contents, filtering on the ‘main_road’ name.
The GeoTIFF is a widely used geospatial raster data format: it is composed of a single file containing both the data and the georeferencing information (not to be confused with the .tiff/.tfw/.prj file triplet, which is considered a “world image” file in GeoServer). This section provides instructions to add and publish a GeoTIFF file.
Open the web browser and navigate to the GeoServer Welcome Page.
Select Add stores from the interface.
Select GeoTIFF - Tagged Image File Format with Geographic information from the set of available Raster Data Sources.
Specify a proper name (for instance, 13tde815295_200803_0x6000m_cl) in the Data Source Name field of the interface.
Click on the browse link in order to set the GeoTIFF location in the URL field.
Note
The 13tde815295_200803_0x6000m_cl.tif file is located at $TRAINING_ROOT/data/user_data/aerial/13tde815295_200803_0x6000m_cl.tif (on Windows %TRAINING_ROOT%\data\user_data\aerial\13tde815295_200803_0x6000m_cl.tif)
Click Save.
Publish the layer by clicking on the publish link.
Check the Coordinate Reference Systems and the Bounding Boxes fields are properly set and click on Save.
At this point the GeoTIFF is being published with GeoServer. You can use the layer preview to inspect the data.
WMS cascading allows exposing layers coming from other WMS servers as if they were local layers. This provides some interesting advantages:
- Clients connecting to your SDI need to care about fewer points of origin, which might be important for high security networks
- It is possible to ask for maps in formats not supported by the original server, or to reproject the maps into projections not supported by the original server (GeoServer supports out of the box almost 5000 different coordinate reference systems)
- It is possible to mix the layers with local ones to generate print oriented formats such as PDF
- It is possible to provide more information about the layer, such as a better description and more keywords, which will benefit all clients, in particular catalogues harvesting information from your capabilities document
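Before cascading a remote WMS it can be worth checking its capabilities document by hand. The sketch below uses the sample server configured later in this section; it may be offline outside the training environment, so the actual fetch is left commented out.

```shell
# Print the capabilities URL; uncomment the curl line to actually fetch the
# document and count the remote layers available for cascading.
CAPS_URL="http://demo1.geo-solutions.it/geoserver-enterprise/ows?service=wms&version=1.1.1&request=GetCapabilities"
echo "$CAPS_URL"
# curl -s "$CAPS_URL" | grep -c "<Layer"
```

If the capabilities document comes back with the layers you expect, the store configuration below should succeed.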
Configuration
The configuration and usage of the cascaded layers follows GeoServer’s traditional ease of use.
Open the web browser and navigate to the GeoServer Welcome Page.
Select Add stores from the interface.
Select WMS - Cascades a remote Web Map Service from the set of available Other Data Sources.
Specify a proper name (for instance, geoserver-enterprise) in the Data Source Name field of the interface.
Specify http://demo1.geo-solutions.it/geoserver-enterprise/ows?service=wms&version=1.1.1&request=GetCapabilities as the URL of the sample data in the Capabilities URL field.
Click Save.
Publish the layer by clicking on the publish link near the GeoSolutions:ne_shaded layer name. Notice that you can also add more layers later.
Check the Coordinate Reference Systems and the Bounding Boxes fields are properly set and click on Save.
At this point the new WMS Layer is being published with GeoServer. You can use the layer preview to inspect the data.
GeoServer has the ability to load data from a remote Web Feature Server (WFS). This is useful if the remote WFS lacks certain functionality that GeoServer contains. For example, if the remote WFS is not also a Web Map Server (WMS), data from the WFS can be cascaded through GeoServer to utilize GeoServer’s WMS. If the remote WFS has a WMS but that WMS cannot output KML, data can be cascaded through GeoServer’s WMS to output KML.
Configuration
The configuration and usage of the cascaded layers follows GeoServer’s traditional ease of use.
Open the web browser and navigate to the GeoServer Welcome Page.
Select Add stores from the interface.
Select Web Feature Server from the set of available Vector Data Sources.
Specify a proper name (for instance, wfs-cascade) in the Data Source Name field of the interface.
Specify http://demo1.geo-solutions.it/geoserver-enterprise/ows?service=wfs&version=1.0.0&request=GetCapabilities as the URL of the sample data in the Capabilities URL field.
Make sure that the HTTP Authentication fields match the authorization you have on the remote server (in this case the server is open, so we don’t need them).
Click Save.
Publish the layer by clicking on the publish link near the geosolutions_country layer name. Notice that you can also add more layers later.
Check the Coordinate Reference Systems and the Bounding Boxes fields are properly set and click on Save.
At this point the new WMS Layer is being published with GeoServer. You can use the layer preview to inspect the data.
The traditional way to use database backed data is to configure either a table or a database view as a new layer in GeoServer. Starting with GeoServer 2.1.0 the user can also create a new layer by specifying a raw SQL query, without actually creating a view in the database. The SQL can also be parametrized, with parameter values passed in along with a WMS or WFS request.
Creating a plain SQL view
In order to create an SQL view the administrator can go to Add a new resource from the Layers page.
Upon selection of a database backed store a list of tables and views available for publication will appear; at the bottom of it a new link, Configure new SQL view, will appear:
Selecting the link Configure new SQL view will open a new page where the SQL statement can be specified:
Plain SQL View configuration
SELECT st.obs_year,
       st.storm_num,
       st.storm_name,
       min(st.obs_datetime) AS storm_start,
       max(st.obs_datetime) AS storm_end,
       max(st.wind) AS max_wind,
       st_makeline(st.geom) AS the_route
FROM (SELECT storm_obs.storm_num,
             storm_obs.storm_name,
             storm_obs.wind,
             storm_obs.press,
             storm_obs.obs_datetime,
             date_part('year'::text, storm_obs.obs_datetime) AS obs_year,
             storm_obs.geom
      FROM storm_obs
      ORDER BY date_part('year'::text, storm_obs.obs_datetime), storm_obs.storm_num, storm_obs.obs_datetime) st
GROUP BY st.obs_year, st.storm_num, st.storm_name
ORDER BY st.obs_year, st.storm_num
Note
The query can be any SQL statement that can be validly executed as part of a subquery in the FROM clause, that is select * from (<the sql view>) [as] vtable. This is true for most SQL statements, but specific syntax might be needed to call a stored procedure, depending on the database. Also, all the columns returned by the SQL statement must have a name; in some databases aliasing is required when calling functions.
Once a valid SQL statement has been specified press the refresh link in the Attributes table to get a list of the feature type attributes:
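The subquery wrapping used for validation can be reproduced by hand to pre-check a statement before pasting it into GeoServer. The shortened view SQL and the database name placeholder below are hypothetical; the psql line is commented out so the sketch reads without a running database.

```shell
# Wrap the candidate view SQL as a subquery, exactly as GeoServer will.
VIEW_SQL="SELECT storm_num, storm_name, geom FROM storm_obs"
echo "SELECT * FROM ($VIEW_SQL) AS vtable LIMIT 1;"
# psql -U postgres -d <your_db> -c "SELECT * FROM ($VIEW_SQL) AS vtable LIMIT 1;"
```

If the wrapped statement runs without errors and every column comes back named, the same SQL should be accepted by the SQL view editor.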
Note
GeoServer will do its best to automatically figure out the geometry type and the native SRID, but they should always be double checked and corrected if necessary. In particular, having the right SRID (spatial reference id) is key to having spatial queries actually work. In many spatial databases the SRID is equal to the EPSG code for the specific spatial reference system, but that is not always true (e.g., Oracle has a number of non-EPSG SRID codes).
Specify a valid SRID.
Manually forcing the 4326 SRID in this case
Note
If stable feature ids are desired for the view’s features, one or more columns providing a unique identification for the features should be checked in the Identifier column. Always make sure those attributes generate an actually unique key, or filtering and WFS clients will misbehave.
Once the query and the attribute details are set press Save and the usual new layer configuration page will show up. That page will have a link to a SQL view editor at the bottom of the Data tab:
Make sure the CRS is EPSG:4326 and manually enter the (-180,-90,180,90) values in the Bounding Boxes sections.
Click Save.
At this point the new WMS Layer is being published with GeoServer.
Creating a parametric SQL view
Warning
As a rule of thumb use SQL parameter substitution only if the required functionality cannot be obtained with safer means, such as dynamic filtering (CQL filters) or SLD parameter substitution. Only use SQL parameters as a last resort, improperly validated parameters can open the door to SQL injection attacks.
A parametric SQL view is based on a SQL query containing parameters whose values can be dynamically provided along with WMS or WFS requests. A parameter is bound by % signs, can have a default value, and should always have a validation regular expression.
In order to create a parametric SQL view perform steps 1 and 2 as before and then insert the following parameters:
Parametric SQL View configuration
SELECT date_part('year'::text, t1.obs_datetime) AS obs_year,
       t1.storm_num,
       t1.storm_name,
       t1.wind,
       t2.wind AS wind_end,
       t1.press,
       t2.press AS press_end,
       t1.obs_datetime,
       t2.obs_datetime AS obs_datetime_end,
       st_makeline(t1.geom, t2.geom) AS geom
FROM storm_obs t1
JOIN (SELECT storm_obs.id,
             storm_obs.storm_num,
             storm_obs.storm_name,
             storm_obs.wind,
             storm_obs.press,
             storm_obs.obs_datetime,
             storm_obs.geom
      FROM storm_obs) t2
  ON (t1.obs_datetime + '06:00:00'::interval) = t2.obs_datetime
 AND t1.storm_name::text = t2.storm_name::text
WHERE date_part('year'::text, t1.obs_datetime) BETWEEN %MIN_OBS_YEAR% AND %MAX_OBS_YEAR%
ORDER BY date_part('year'::text, t1.obs_datetime), t1.storm_num, t1.obs_datetime
Note
The query defines two parameters, %MIN_OBS_YEAR% and %MAX_OBS_YEAR%.
Click on Guess parameters from SQL. GeoServer will automatically create fields with the parameters specified in the view:
Note
Always provide default values for each parameter in order to let the layer work properly, and also be sure the regular expressions for value validation are correct.
Examples of regular expressions:
- ^[\d\.\+-eE]+$ will check that the parameter value is composed of valid elements for a floating point number, possibly in scientific notation, but will not check that the provided value is actually a valid floating point number
- [^;']+ will check that the parameter value does not contain quotes or semicolons, preventing common SQL injection attacks, without actually imposing much on the parameter value structure
Fill in some default values for the parameters, so that GeoServer can run the query and inspect the results in the next steps. Set MAX_OBS_YEAR to 2020 and MIN_OBS_YEAR to 0.
Refresh the attributes, check the Geometry SRID and publish the layer as before. Also assign the storm_track_interval style to the layer as Default Style.
Click on the OpenLayers link in the Layer Preview list for the v_storm_track_interval layer.
At first glance you won’t see anything since the layer is using the default parameters for the observation years. Specify two years for the view by adding this parameter at the end of the GetMap request:
&viewparams=MIN_OBS_YEAR:2000;MAX_OBS_YEAR:2000
You should obtain a request like this:
http://localhost:8083/geoserver/geosolutions/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:v_storm_track_interval&styles=&bbox=-180.0,-90.0,180.0,90.0&width=660&height=330&srs=EPSG:4326&format=application/openlayers&viewparams=MIN_OBS_YEAR:2000;MAX_OBS_YEAR:2000
Now you are able to see the hurricanes from the parametric view and also dynamically choose the interval of observation years of interest.
Parametric SQL View OL preview
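The request above can be assembled from shell variables, which makes it easy to swap the year interval. Host and port are the workshop defaults and may differ in your setup; the curl line is commented out so the sketch reads without a running GeoServer.

```shell
# Build a GetMap URL with a viewparams year interval and print it.
BASE="http://localhost:8083/geoserver/geosolutions/wms"
P="service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:v_storm_track_interval"
P="$P&styles=&bbox=-180.0,-90.0,180.0,90.0&width=660&height=330&srs=EPSG:4326&format=image/png"
P="$P&viewparams=MIN_OBS_YEAR:2000;MAX_OBS_YEAR:2000"
echo "$BASE?$P"
# curl -s -o storms_2000.png "$BASE?$P"   # fetch it while GeoServer is running
```

Changing the two year values in the viewparams line is all that is needed to request a different observation interval.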
This section covers the task of adding and publishing a ImageMosaic file with GeoServer.
Navigate to the GeoServer Welcome Page.
On the Welcome page locate the Login form located at the top right corner, and enter the username “admin” and password “Geos”.
Click the Add stores link.
Add stores link
Select the ImageMosaic link and click it.
Add a new Image Mosaic
On the Add Raster Data Source page enter $TRAINING_ROOT/data/user_data/aerial (on Windows %TRAINING_ROOT%\data\user_data\aerial\) in the URL field (or browse the filesystem by clicking on Browse), “boulder_bg” in the Data Source Name and Description fields, and click Save.
Specifying store parameters
After saving, you will be taken to a page that lists all the layers in the store and gives you the option to publish any of them. Click Publish.
Publishing a layer from the store
The Coordinate Reference Systems should be automatically populated, as well as the Name, Title and Bounding Boxes fields.
Note
Change the Name and Title to boulder_bg as shown in the figure.
The CRS and BBox fields are auto-filled with information taken from the underlying files. The coverage options section is filled with default parameters (which will be discussed later on in the training).
The coverage layer gui for the boulder_bg layer.
Scroll to the bottom of the page and then click Save. If all went well you should see something like this:
After a successful save.
In the Layer Preview section click on the OpenLayers link to preview the layer in an interactive viewer, filtering by boulder_bg name:
Mosaic preview.
If the GDAL libraries are available, it is possible to access several additional GDAL-supported data formats. The available GDAL plugins currently add support for the DTED, EHdr, ERDAS Imagine, MrSID, JP2K (via the MrSID driver) and NITF data formats. Moreover, if a valid license has been purchased and the proper native library is available, ECW, JP2K (via ECW) and JP2K (via Kakadu) are also supported. This section provides instructions to add and publish MrSID, ECW and JP2K datasets.
Warning
This assumes the GeoServer GDAL image plug-in is already installed; the GDAL plugin is normally an extension. If the stores described in this section are not available, install the 'geoserver-2.2-SNAPSHOT-gdal-plugin' from %TRAINING_ROOT%\data\plugins\: just decompress the zip file into %TRAINING_ROOT%\webapps\geoserver\WEB-INF\lib\ and restart GeoServer.
Open the web browser and navigate to the GeoServer Welcome Page.
Select Add stores from the interface.
Select MrSID - MrSID Coverage Format from the set of available Raster Data Sources.
Specify a proper name (for instance, c3008957_nes_20) in the Data Source Name field of the interface.
Specify file:%TRAINING_ROOT%/data/user_data/c3008957_nes_20/c3008957_nes_20.sid as the URL of the sample data in the Connection Parameters - URL field (replace %TRAINING_ROOT% with your current training root directory).
Click Save.
Assign a proper layer name (e.g. c3008957_nes_20), then publish the layer by clicking on the publish link. Click on Save when done.
At this point the MrSID data is published with GeoServer.
Click the Layer Preview link in the left GeoServer menu.
Look for the geosolutions:c3008957_nes_20 layer and click the OpenLayers link beside it.
Warning
Attention: you need a license in order to use ECW data sets. Here we are using a freely distributed ECW file for demonstration only.
ECW (Enhanced Compression Wavelet) is a proprietary wavelet compression image format optimized for aerial and satellite imagery.
Open the web browser and navigate to the GeoServer Welcome Page.
Select Add stores from the interface.
Select ECW - ECW Coverage Format from the set of available Raster Data Sources.
Specify a proper name (for instance, TerraColor_Sydney_AU_15m) in the Data Source Name field of the interface.
Specify file:%TRAINING_ROOT%/data/user_data/tc_sydney_au_ecw/TerraColor_Sydney_AU_15m.ecw as the URL of the sample data in the Connection Parameters - URL field (replace %TRAINING_ROOT% with your current training root directory).
Click Save.
Assign a proper layer name (e.g. TerraColor_Sydney_AU_15m), then publish the layer by clicking on the publish link.
At this point the ECW data is published with GeoServer.
Click the Layer Preview link in the left GeoServer menu.
Look for the geosolutions:TerraColor_Sydney_AU_15m layer and click the OpenLayers link beside it.
JPEG 2000 is an image coding system that uses state-of-the-art compression techniques based on wavelet technology.
Open the web browser and navigate to the GeoServer Welcome Page.
Select Add stores from the interface.
Select JP2ECW - JP2 (ECW) Coverage Format from the set of available Raster Data Sources.
Note
We used the JP2ECW - JP2 (ECW) Coverage Format because the JP2MrSID - JP2 (MrSID) Coverage Format is not fully stable and may not work properly, especially on several Linux distributions.
Specify a proper name (for instance, TerraColor_Sydney_AU_15m_JP2K) in the Data Source Name field of the interface.
Specify file:%TRAINING_ROOT%/data/user_data/tc_sydney_au_jp2/TerraColor_Sydney_AU_15m.jp2 as the URL of the sample data in the Connection Parameters - URL field (replace %TRAINING_ROOT% with your current training root directory).
Click Save.
Assign a proper layer name (e.g. TerraColor_Sydney_AU_15m_JP2K), then publish the layer by clicking on the publish link.
At this point the JP2K data is published with GeoServer.
- Click the Layer Preview link in the left GeoServer menu.
- Look for the geosolutions:TerraColor_Sydney_AU_15m_JP2K layer and click the OpenLayers link beside it.
Pretty maps with GeoServer¶
This module describes how to manage map visualization in GeoServer. It covers styles, decorations, Layer Groups and other GeoServer features affecting the WMS protocol.
In this module you will:
Styling with SLD¶
This section introduces the concepts of the Styled Layer Descriptor (SLD) markup language. SLD is the styling language used by GeoServer: all WMS portrayal is specified through it.
What you will learn¶
In this section you will:
The most important function of a web map server is the ability to style and render data. This section covers the task of adding a new style to GeoServer and configuring the default style for a particular layer.
From the GeoServer Welcome Page navigate to Style.
Navigating to Style configuration
Click New
Adding a new style
Enter “mainrd” in the Name field and locate the SLD file upload dialogue.
Specifying style name and populating from a file.
Navigate to the workshop directory $TRAINING_ROOT/data/user_data/ (on Windows %TRAINING_ROOT%\data\user_data\), select the foss4g_mainrd.sld file, and click Upload.
Note
In GeoServer, styles are represented via SLD (Styled Layer Descriptor) documents. SLD is an XML format for specifying the symbolization of a layer. When an SLD document is uploaded the contents are shown in the text editor. The editor can be used to edit the contents of the SLD directly.
Add the new style by clicking Submit. Once it's saved, you should see something like this:
Submitting style
After having created the style, it’s time to apply it to a vector layer. Click on the Layers link.
Navigating to Layers
Select the “Mainrd” layer on the Layers page.
Selecting a layer
Select the Publish tab.
Publish tab
Assign the newly created style “mainrd” as the default style.
Publish tab
Warning
Many new users mistake the Available Styles for the Default Style; note that they are different: the default style is used implicitly when no style is specified in a map request, while the available styles are just optional compatible alternatives.
Note
GeoServer 2.x assigns a default style depending on the geometry type of the layer: line, polygon, raster or point.
Scroll to the bottom of the page and hit Save.
Use the map preview to check the new style. Note that you'll have to zoom in once to see the data, due to the map scale filters (the MaxScaleDenominator directive in the SLD).
In previous modules the style for a layer was configured by uploading an existing SLD. In this section the task of creating a new SLD document from scratch will be covered. In particular we are going to create some styles that can be applied to vector datasets, first by drawing patterns and dash arrays on polygons and lines, and then by drawing roads and labels along lines.
In this section you will:
From the GeoServer Welcome Page navigate to Style.
Navigating to Style configuration
From the style list select the citylimits style
The styles list
Inside the Style Editor we have the following style:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" version="1.0.0">
  <sld:UserLayer>
    <sld:LayerFeatureConstraints>
      <sld:FeatureTypeConstraint/>
    </sld:LayerFeatureConstraints>
    <sld:UserStyle>
      <sld:Name>BoulderCityLimits</sld:Name>
      <sld:Title/>
      <sld:IsDefault>1</sld:IsDefault>
      <sld:FeatureTypeStyle>
        <sld:Name>group 0</sld:Name>
        <sld:FeatureTypeName>Feature</sld:FeatureTypeName>
        <sld:SemanticTypeIdentifier>generic:geometry</sld:SemanticTypeIdentifier>
        <sld:SemanticTypeIdentifier>simple</sld:SemanticTypeIdentifier>
        <sld:Rule>
          <sld:Name>Filled</sld:Name>
          <sld:MinScaleDenominator>75000</sld:MinScaleDenominator>
          <sld:PolygonSymbolizer>
            <sld:Fill>
              <sld:CssParameter name="fill">#7F7F7F</sld:CssParameter>
              <sld:CssParameter name="fill-opacity">0.5</sld:CssParameter>
            </sld:Fill>
            <sld:Stroke>
              <sld:CssParameter name="stroke">#7F7F7F</sld:CssParameter>
              <sld:CssParameter name="stroke-opacity">0.5</sld:CssParameter>
              <sld:CssParameter name="stroke-width">2.0</sld:CssParameter>
            </sld:Stroke>
          </sld:PolygonSymbolizer>
          <sld:TextSymbolizer>
            <sld:Label>
              <ogc:Literal>Boulder</ogc:Literal>
            </sld:Label>
            <sld:Font>
              <sld:CssParameter name="font-family">Arial</sld:CssParameter>
              <sld:CssParameter name="font-size">14.0</sld:CssParameter>
              <sld:CssParameter name="font-style">normal</sld:CssParameter>
              <sld:CssParameter name="font-weight">normal</sld:CssParameter>
            </sld:Font>
            <sld:LabelPlacement>
              <sld:PointPlacement>
                <sld:AnchorPoint>
                  <sld:AnchorPointX>
                    <ogc:Literal>0.0</ogc:Literal>
                  </sld:AnchorPointX>
                  <sld:AnchorPointY>
                    <ogc:Literal>0.5</ogc:Literal>
                  </sld:AnchorPointY>
                </sld:AnchorPoint>
                <sld:Rotation>
                  <ogc:Literal>0</ogc:Literal>
                </sld:Rotation>
              </sld:PointPlacement>
            </sld:LabelPlacement>
            <sld:Fill>
              <sld:CssParameter name="fill">#000000</sld:CssParameter>
            </sld:Fill>
            <sld:VendorOption name="maxDisplacement">200</sld:VendorOption>
            <sld:VendorOption name="Group">true</sld:VendorOption>
          </sld:TextSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Note
The most important sections are:

- The <Rule> tag combines a number of symbolizers (optionally together with an OGC filter) to define the portrayal of a feature.
- The <PolygonSymbolizer> styles polygons, containing styling information about their border (stroke) and their fill.
- The <TextSymbolizer> specifies text labels and their style:
  - <Label> specifies the content of the text label.
  - <Font> specifies the font information for the labels.
  - <LabelPlacement> sets the position of the label relative to its associated feature.
  - <Fill> determines the fill color of the text label.
  - The VendorOption maxDisplacement controls the displacement of the label. Normally GeoServer would label a polygon at its centroid, provided the location is not busy with another label and the label is not too big compared to the polygon, or not label it at all otherwise. When maxDisplacement is set, the labeller will search for another location within maxDisplacement pixels from the pre-computed label point.
  - The VendorOption Group is for when you have a set of related features that you only want a single label for. The grouping option groups all features with the same label text, then finds a representative geometry for the group.
- The <MaxScaleDenominator> and <MinScaleDenominator> are used to apply a particular SLD rule only at a specific range of scales. The above SLD makes sure that the Boulder border disappears once we zoom in enough to see the city details. An alternative approach could be to keep the layer showing, but switch it to a different style, for example a thin red line, so that the details of the city are not disturbed by the polygon fill.
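To get a feeling for when such rules kick in, here is a rough Python sketch (not part of the workshop data) that approximates the WMS scale denominator of a lon/lat request using the OGC standardized pixel size of 0.28 mm, and mimics Min/MaxScaleDenominator rule selection. The meters-per-degree constant is an equatorial approximation:

```python
# Assumed constants: OGC standardized rendering pixel of 0.28 mm, and a
# rough equatorial meters-per-degree value for EPSG:4326 requests.
OGC_PIXEL_M = 0.00028
M_PER_DEGREE = 111_320

def scale_denominator(bbox_width_deg, image_width_px):
    """Approximate the scale denominator of a lon/lat GetMap request."""
    ground_meters = bbox_width_deg * M_PER_DEGREE
    return ground_meters / (image_width_px * OGC_PIXEL_M)

def rule_active(scale, min_sd=None, max_sd=None):
    """Apply Min/MaxScaleDenominator bounds the way an SLD rule does."""
    if min_sd is not None and scale < min_sd:
        return False
    if max_sd is not None and scale >= max_sd:
        return False
    return True

# A whole-world, 660 px wide request sits far above 1:75000, so the
# "Filled" rule (MinScaleDenominator 75000) applies, while a rule capped
# by MaxScaleDenominator 75000 (like the rivers one below) does not.
world = scale_denominator(360, 660)
print(world, rule_active(world, min_sd=75_000), rule_active(world, max_sd=75_000))
```

This is only a back-of-the-envelope model; GeoServer's actual computation accounts for the projection and DPI hints.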
Now from the style list select the rivers style.
Inside the Style Editor we have the following style:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" version="1.0.0">
  <sld:UserLayer>
    <sld:LayerFeatureConstraints>
      <sld:FeatureTypeConstraint/>
    </sld:LayerFeatureConstraints>
    <sld:UserStyle>
      <sld:Name>Hydrology Line</sld:Name>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Rule>
          <sld:Name>default rule</sld:Name>
          <sld:MaxScaleDenominator>75000</sld:MaxScaleDenominator>
          <sld:LineSymbolizer>
            <sld:Stroke>
              <sld:CssParameter name="stroke-width">0.5</sld:CssParameter>
              <sld:CssParameter name="stroke">#06607F</sld:CssParameter>
            </sld:Stroke>
          </sld:LineSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Note
This is a very simple line style. Note the LineSymbolizer, which styles lines. Lines are one-dimensional geometry elements that contain position and length, and can be comprised of multiple line segments.

The key tag is <Stroke>. This tag is required, and determines the visualization of the line:
- stroke: specifies the solid color given to the line, in the form #RRGGBB. Default is black (#000000).
- stroke-width: specifies the width of the line in pixels. Default is 1.
In this case MaxScaleDenominator is used to make sure that the rivers only start showing up once we are zoomed in enough, in particular as the city borders disappear.
From the GeoServer Welcome Page navigate to Style.
Navigating to Style configuration
Click New
Adding a new style
Enter “landmarks” in the Name field.
Creating a new style
In the SLD Editor enter the following XML:
<StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.0.0" xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd">
  <NamedLayer>
    <Name>landmarks</Name>
    <UserStyle>
      <Name>landmarks</Name>
      <Title>Point Landmarks</Title>
      <FeatureTypeStyle>
        <Rule>
          <Name>default</Name>
          <Title>Landmarks</Title>
          <PointSymbolizer>
            <Graphic>
              <Mark>
                <WellKnownName>triangle</WellKnownName>
                <Fill>
                  <CssParameter name="fill">#009900</CssParameter>
                  <CssParameter name="fill-opacity">0.2</CssParameter>
                </Fill>
                <Stroke>
                  <CssParameter name="stroke">#000000</CssParameter>
                  <CssParameter name="stroke-width">2</CssParameter>
                </Stroke>
              </Mark>
              <Size>12</Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
Note
Take into account:
- WellKnownName: the name of the common shape. Options are circle, square, triangle, star, cross, or x. Default is square.
- fill: specifies how the symbolizer should be filled. Options are a <CssParameter name="fill"> specifying a color in the form #RRGGBB, or <GraphicFill> for a fill made with a repeated graphic.
- fill-opacity: determines the opacity (transparency) of symbolizers. Values range from 0 (completely transparent) to 1 (completely opaque). Default is 1.
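Since the Mark parameters above are plain XML, a symbolizer fragment like this can also be generated programmatically. Here is an illustrative Python sketch (not part of the exercise; the parameter defaults mirror the landmarks style above):

```python
import xml.etree.ElementTree as ET

def point_symbolizer(shape="triangle", fill="#009900", fill_opacity="0.2",
                     stroke="#000000", stroke_width="2", size="12"):
    """Assemble a <PointSymbolizer> fragment like the landmarks style."""
    ps = ET.Element("PointSymbolizer")
    graphic = ET.SubElement(ps, "Graphic")
    mark = ET.SubElement(graphic, "Mark")
    ET.SubElement(mark, "WellKnownName").text = shape
    f = ET.SubElement(mark, "Fill")
    ET.SubElement(f, "CssParameter", name="fill").text = fill
    ET.SubElement(f, "CssParameter", name="fill-opacity").text = fill_opacity
    s = ET.SubElement(mark, "Stroke")
    ET.SubElement(s, "CssParameter", name="stroke").text = stroke
    ET.SubElement(s, "CssParameter", name="stroke-width").text = stroke_width
    ET.SubElement(graphic, "Size").text = size  # <Size> is a sibling of <Mark>
    return ET.tostring(ps, encoding="unicode")

snippet = point_symbolizer()
print(snippet)
```

Calling point_symbolizer(shape="star", size="16") would produce the corresponding variant; the output still needs to be wrapped in a Rule/FeatureTypeStyle as in the full SLD.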
Then click the Save button.
Open the geosolutions:bptlandmarks vector layer, but this time associate the style as an “Additional Style”:
Open the Layers Preview
Click on the Save button.
Preview the geosolutions:bptlandmarks layer, which with the default style should be empty due to scale dependencies. Then click the options button at the top left of the map and select the landmarks style in the style drop-down:
Open the Layers Preview
Go and edit the configuration of the bplandmarks layer, enter the Publish tab and associate the cemetery_mark and cemetery_graphics styles as “Additional styles” for the layer, then press Save.

From the Welcome Page navigate to Style.
Note
You have to be logged in as Administrator in order to activate this function.
Select “cemetery_graphics” from the list
Patterns filling SLD
In the SLD Editor you will see the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0.0">
  <sld:UserLayer>
    <sld:UserStyle>
      <sld:Name>tl 2010 08013 arealm</sld:Name>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Rule>
          <sld:Name>cemeteries</sld:Name>
          <ogc:Filter>
            <ogc:PropertyIsEqualTo>
              <ogc:PropertyName>MTFCC</ogc:PropertyName>
              <ogc:Literal>K2582</ogc:Literal>
            </ogc:PropertyIsEqualTo>
          </ogc:Filter>
          <sld:MaxScaleDenominator>500000.0</sld:MaxScaleDenominator>
          <sld:PolygonSymbolizer>
            <sld:Fill>
              <sld:GraphicFill>
                <sld:Graphic>
                  <sld:ExternalGraphic>
                    <sld:OnlineResource xlink:type="simple" xlink:href="./img/landmarks/area/grave_yard.png" />
                    <sld:Format>image/png</sld:Format>
                  </sld:ExternalGraphic>
                </sld:Graphic>
              </sld:GraphicFill>
            </sld:Fill>
          </sld:PolygonSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Filling with patterns
Note
The above SLD defines a <PolygonSymbolizer> with a <GraphicFill> pointing to a PNG, ./img/landmarks/area/grave_yard.png, in the GeoServer data directory, which GeoServer will use as the pattern to fill the polygon.

Like before, now select “cemetery_mark” from the list
True Type Font filling SLD
In the SLD Editor you will see the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0.0">
  <sld:UserLayer>
    <sld:Name>cemeteries</sld:Name>
    <sld:UserStyle>
      <sld:Name>tl 2010 08013 arealm</sld:Name>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Rule>
          <sld:Name>cemeteries</sld:Name>
          <ogc:Filter>
            <ogc:PropertyIsEqualTo>
              <ogc:PropertyName>MTFCC</ogc:PropertyName>
              <ogc:Literal>K2582</ogc:Literal>
            </ogc:PropertyIsEqualTo>
          </ogc:Filter>
          <sld:MaxScaleDenominator>500000.0</sld:MaxScaleDenominator>
          <sld:PolygonSymbolizer>
            <sld:Fill>
              <sld:CssParameter name="fill">#D3FFD3</sld:CssParameter>
              <sld:CssParameter name="fill-opacity">0.5</sld:CssParameter>
            </sld:Fill>
            <sld:Stroke>
              <sld:CssParameter name="stroke">#6DB26D</sld:CssParameter>
            </sld:Stroke>
          </sld:PolygonSymbolizer>
          <sld:PolygonSymbolizer>
            <sld:Fill>
              <sld:GraphicFill>
                <sld:Graphic>
                  <sld:Mark>
                    <sld:WellKnownName>ttf://Wingdings#0x0055</sld:WellKnownName>
                    <sld:Stroke>
                      <sld:CssParameter name="stroke">#6DB26D</sld:CssParameter>
                    </sld:Stroke>
                  </sld:Mark>
                  <sld:Size>16</sld:Size>
                </sld:Graphic>
              </sld:GraphicFill>
            </sld:Fill>
            <sld:VendorOption name="graphic-margin">8</sld:VendorOption>
          </sld:PolygonSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Filling with TTF fonts
Note
The above SLD defines a <PolygonSymbolizer> with a <GraphicFill> referencing a specific Wingdings character, which GeoServer will use as the pattern to fill the polygon. The graphic-margin VendorOption is used to add some space around the symbols.

Let's now take a look at another way to fill polygons using patterns: hatches. From the Welcome Page navigate to Style and select “wetlands” from the list.
Note
You may switch to the second page in order to find the style.
Wetlands style with some hatches
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" version="1.0.0">
  <sld:UserLayer>
    <sld:LayerFeatureConstraints>
      <sld:FeatureTypeConstraint/>
    </sld:LayerFeatureConstraints>
    <sld:UserStyle>
      <sld:Name>Wetlands regulatory area</sld:Name>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Rule>
          <sld:Name>default rule</sld:Name>
          <sld:MaxScaleDenominator>10000.0</sld:MaxScaleDenominator>
          <sld:PolygonSymbolizer>
            <sld:Fill>
              <sld:GraphicFill>
                <sld:Graphic>
                  <sld:Mark>
                    <sld:WellKnownName>shape://times</sld:WellKnownName>
                    <sld:Fill/>
                    <sld:Stroke>
                      <sld:CssParameter name="stroke">#ADD8E6</sld:CssParameter>
                      <sld:CssParameter name="stroke-width">1.0</sld:CssParameter>
                    </sld:Stroke>
                  </sld:Mark>
                  <sld:Size>
                    <ogc:Literal>8.0</ogc:Literal>
                  </sld:Size>
                </sld:Graphic>
              </sld:GraphicFill>
              <!--
              <sld:CssParameter name="fill">#7CE3F8</sld:CssParameter>
              <sld:CssParameter name="fill-opacity">0.5</sld:CssParameter>
              -->
            </sld:Fill>
          </sld:PolygonSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Comment out the following line in order to see the polygons at lower zoom levels too:
<!-- sld:MaxScaleDenominator>10000.0</sld:MaxScaleDenominator -->
Click Submit to add the new SLD.
To see how the styles work, make sure the default style of the Wetlands_regulatory_area feature type is set to wetlands.
Changing the default style of the Wetlands_regulatory_area feature type to wetlands
Use the Map Preview to preview the new style.
Previewing the bplandmarks layer with the hatches applied
In the previous example we used times as the hatch mark. GeoServer provides several different kinds of hatch marks:
Different types of hatch marks.
Let's now get familiar with dashes. We are going to see how to draw several kinds of dashes to represent different types of trails or roads.

From the Welcome Page navigate to Style.
Note
You have to be logged in as Administrator in order to activate this function.
Select “trails” from the list
Dashes SLD
In the SLD Editor you will see the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" version="1.0.0">
  <sld:UserLayer>
    <sld:LayerFeatureConstraints>
      <sld:FeatureTypeConstraint/>
    </sld:LayerFeatureConstraints>
    <sld:UserStyle>
      <sld:Name>Trails</sld:Name>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Rule>
          <sld:MaxScaleDenominator>75000</sld:MaxScaleDenominator>
          <sld:LineSymbolizer>
            <sld:Stroke>
              <sld:CssParameter name="stroke">#6B4900</sld:CssParameter>
              <sld:CssParameter name="stroke-width">0.1</sld:CssParameter>
              <sld:CssParameter name="stroke-dasharray">2.0</sld:CssParameter>
            </sld:Stroke>
          </sld:LineSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Simple dash-array
Note
The above SLD defines a <LineSymbolizer> with a <Stroke> using the stroke-dasharray CSS property to render the trails as a simple gray dash.

Note

The stroke-dasharray property encodes a dash pattern as a series of numbers separated by spaces. Odd-indexed numbers (first, third, etc.) determine the length in pixels to draw the line, and even-indexed numbers (second, fourth, etc.) determine the length in pixels to blank out the line. Default is an unbroken line. Starting from version 2.1, dash arrays can be combined with graphic strokes to generate complex line styles with alternating symbols or a mix of lines and symbols.
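As a quick illustration of these semantics, the following Python sketch (a hypothetical helper, not part of GeoServer) expands a dash array into pen-down/pen-up runs along a line, duplicating an odd-length pattern the way SVG does:

```python
from itertools import cycle

def dash_segments(dasharray, length):
    """Expand an SLD stroke-dasharray string into (start, end, pen_down)
    runs along a line `length` pixels long. An odd-length pattern is
    doubled (as in SVG), so "2.0" means 2 px drawn, 2 px blank."""
    pattern = [float(v) for v in dasharray.split()]
    if len(pattern) % 2:
        pattern = pattern * 2  # make on/off pairs out of an odd count
    runs, pos, pen_down = [], 0.0, True
    for step in cycle(pattern):
        if pos >= length or step <= 0:  # guard against degenerate patterns
            break
        end = min(pos + step, length)
        runs.append((pos, end, pen_down))
        pos, pen_down = end, not pen_down
    return runs

print(dash_segments("2.0", 8))  # pen down on [0,2) and [4,6)
```

Running it on the "6 18" pattern used later in the trails2 style shows 6 px of line followed by 18 px of gap, repeated.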
The style above is the default one for the layer geosolutions:Trails. Let's have a look at a slightly more complex example. From the Welcome Page navigate to Style and select “trails2” from the list.
Trails2 Style
In the SLD Editor you will see the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:sld="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:gml="http://www.opengis.net/gml" version="1.0.0">
  <sld:UserLayer>
    <sld:LayerFeatureConstraints>
      <sld:FeatureTypeConstraint/>
    </sld:LayerFeatureConstraints>
    <sld:UserStyle>
      <sld:Name>Trails</sld:Name>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Rule>
          <sld:MaxScaleDenominator>75000</sld:MaxScaleDenominator>
          <sld:LineSymbolizer>
            <sld:Stroke>
              <sld:GraphicStroke>
                <sld:Graphic>
                  <sld:Mark>
                    <sld:WellKnownName>circle</sld:WellKnownName>
                    <sld:Fill>
                      <sld:CssParameter name="fill">#AA0000</sld:CssParameter>
                    </sld:Fill>
                  </sld:Mark>
                  <sld:Size>
                    <ogc:Literal>6</ogc:Literal>
                  </sld:Size>
                </sld:Graphic>
              </sld:GraphicStroke>
              <sld:CssParameter name="stroke-dasharray">6 18</sld:CssParameter>
            </sld:Stroke>
          </sld:LineSymbolizer>
          <sld:LineSymbolizer>
            <sld:Stroke>
              <sld:CssParameter name="stroke">#AA0000</sld:CssParameter>
              <sld:CssParameter name="stroke-dasharray">10 14</sld:CssParameter>
              <sld:CssParameter name="stroke-dashoffset">14</sld:CssParameter>
            </sld:Stroke>
          </sld:LineSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Note
Notice two interesting things in this style: there are two <LineSymbolizer> elements, the first defining a circle Mark with a simple dash array, and the second a simple stroke that also defines a dash offset. The latter specifies the distance in pixels into the dash array pattern at which to start drawing; the default is 0.

Open the geosolutions:Trails layer and add trails2 as an additional style, then go to the Layer Preview to see it in action.
Warning
You have to zoom in from the layer preview in order to see the lines due to the MaxScaleDenominator
From the Welcome Page navigate to Style in order to edit the mainrd SLD.
Note
You have to be logged in as Administrator in order to activate this function.
In the SLD Editor, find the sld:TextSymbolizer associated with the ogc:PropertyName LABEL_NAME.
Road style
Note
The style defines a <Font> and a <Halo> in order to render the value of the LABEL_NAME property for that layer. The interesting part is at the bottom, where several <VendorOption> elements are specified. Those options are GeoServer specific and allow us to obtain better and nicer results by tweaking the label renderer behaviour.
followLine (boolean)
The followLine option forces a label to follow the curve of the line.

  <VendorOption name="followLine">true</VendorOption>

To use this option, also add a line placement to the symbolizer:

  <LabelPlacement>
    <LinePlacement/>
  </LabelPlacement>

repeat (number)
The repeat option determines how often GeoServer labels a line. Normally GeoServer would label each line only once, regardless of its length. Specify a positive value to make it draw the label every repeat pixels.

  <VendorOption name="repeat">100</VendorOption>

group (enum{yes/no})
Sometimes you will have a set of related features that you only want a single label for. The grouping option groups all features with the same label text, then finds a representative geometry for the group. Roads data is an obvious example: you only want a single label for all the segments of the same road. When the grouping option is off (default), grouping is not performed and each geometry is labelled (space permitting). With the grouping option on, all the geometries with the same label are grouped together and the label position is determined from ALL the geometries.

  <VendorOption name="group">yes</VendorOption>

Warning: watch out - you could group together two sets of features by accident.

maxDisplacement (number)
The maxDisplacement option controls the displacement of the label along a line. Normally GeoServer would label a line at its center point only, provided the location is not busy with another label, and not label it at all otherwise. When set, the labeller will search for another location within maxDisplacement pixels from the pre-computed label point. When used in conjunction with repeat, the value for maxDisplacement should always be lower than the value for repeat.

  <VendorOption name="maxDisplacement">10</VendorOption>
Another important thing to notice in this style is the road casing, that is, each road segment is painted by two overlapping strokes of different color and size.
Placing the strokes in the two separate feature type styles is crucial:
- With the symbolizers in two separate FeatureTypeStyle elements, all roads are painted with the large stroke first, and then again with the thinner, lighter one.
- If instead the two symbolizers were placed in the same FeatureTypeStyle element the result would be different, and not pleasing to see, since the renderer would take the first road, paint it with the large and thin strokes in sequence, then move to the next one and repeat until the end.
Road casing with a single FeatureTypeStyle element
Point data in SLD can be depicted with a PointSymbolizer and labelled with a TextSymbolizer. This section describes an existing, realistic style, available in the data directory, that depicts the point landmarks layer (bptlandmarks) with icons and labels.
The bptlandmarks layer (Boulder point landmarks) contains the location of significant point entities such as malls, schools, airports and the like. The attribute structure is reported in the GeoServer page for the layer:

Point landmarks attribute structure
The style will use the MTFCC code to categorize the various points into different types (e.g., schools have MTFCC = K2543), and will use FULLNAME for the label.
This results in the following map:

Point landmarks in Boulder
The complete style we'll be referring to is named point_landmark; you can have a look at the full style in the GeoServer style editor:

Point landmarks style
A point symbolizer depicts a symbol by means of a Mark or an ExternalGraphic. The former is a built-in vector symbol that can be stroked and filled at the styler's will, but only a handful of such symbols are available, whilst the latter can be a user-provided image or SVG graphic.

The point landmark styles use the Open Street Map icons for most of the locations. The images have been added inside the data directory, under styles/im, since this allows referring to them by relative path:

Point landmarks style
Given the above symbols a point symbolizer looks as follows:
<sld:PointSymbolizer>
<sld:Graphic>
<sld:ExternalGraphic>
<sld:OnlineResource xlink:type="simple" xlink:href="./img/landmarks/school.png" />
<sld:Format>image/png</sld:Format>
</sld:ExternalGraphic>
</sld:Graphic>
<VendorOption name="labelObstacle">true</VendorOption>
</sld:PointSymbolizer>
The icon is depicted on the screen as-is, at its natural resolution. The labelObstacle vendor parameter, specific to GeoServer, makes sure the icon is treated as a label obstacle, that is, it ensures no label will ever be drawn over the point.
The text symbolizer associates a label with a point using an attribute value as the label source. The following symbolizer is used to label schools:
<sld:TextSymbolizer>
<sld:Label>
<ogc:PropertyName>FULLNAME</ogc:PropertyName>
</sld:Label>
<sld:Font>
<sld:CssParameter name="font-family">Arial</sld:CssParameter>
<sld:CssParameter name="font-size">12.0</sld:CssParameter>
<sld:CssParameter name="font-style">normal</sld:CssParameter>
<sld:CssParameter name="font-weight">normal</sld:CssParameter>
</sld:Font>
<sld:LabelPlacement>
<sld:PointPlacement>
<sld:AnchorPoint>
<sld:AnchorPointX>
<ogc:Literal>0.5</ogc:Literal>
</sld:AnchorPointX>
<sld:AnchorPointY>
<ogc:Literal>1.0</ogc:Literal>
</sld:AnchorPointY>
</sld:AnchorPoint>
<sld:Displacement>
<sld:DisplacementX>
<ogc:Literal>0.0</ogc:Literal>
</sld:DisplacementX>
<sld:DisplacementY>
<ogc:Literal>-10.0</ogc:Literal>
</sld:DisplacementY>
</sld:Displacement>
<sld:Rotation>
<ogc:Literal>0.0</ogc:Literal>
</sld:Rotation>
</sld:PointPlacement>
</sld:LabelPlacement>
<sld:Halo>
<sld:Radius>
<ogc:Literal>1.5</ogc:Literal>
</sld:Radius>
<sld:Fill>
<sld:CssParameter name="fill">#FFFFFF</sld:CssParameter>
</sld:Fill>
</sld:Halo>
<sld:Fill>
<sld:CssParameter name="fill">#000033</sld:CssParameter>
</sld:Fill>
<sld:Priority>200000</sld:Priority>
<sld:VendorOption name="autoWrap">100</sld:VendorOption>
</sld:TextSymbolizer>
Highlights about the above style:
- Uses FULLNAME as the label source
- Uses an Arial 12pt font
- Places the label below the point, and offsets it by 10 pixels to the south
- Applies a white halo to make it stand out of the background map
- Sets its priority to 200000 (high, important) to make sure the label is depicted in preference to others
- Uses the autoWrap option to make it wrap onto the next line if it's larger than 100 pixels (the full list of labelling vendor options is available in the GeoServer user guide)
A Rule is an SLD construct allowing the style editor to control scale dependencies and to filter data, so that only certain features are depicted using the symbolizers contained in the rule.
The rule for the school points looks as follows:
<sld:Rule>
<sld:Name>school</sld:Name>
<ogc:Filter>
<ogc:PropertyIsEqualTo>
<ogc:PropertyName>MTFCC</ogc:PropertyName>
<ogc:Literal>K2543</ogc:Literal>
</ogc:PropertyIsEqualTo>
</ogc:Filter>
<sld:MaxScaleDenominator>100000</sld:MaxScaleDenominator>
<sld:PointSymbolizer>
<!-- same as above -->
</sld:PointSymbolizer>
<sld:TextSymbolizer>
<!-- same as above -->
</sld:TextSymbolizer>
</sld:Rule>
Highlights about the above rule:
- makes sure the symbolizers are applied only to the features whose MTFCC = K2543
- shows the symbols only when the scale denominator is below 100000 (e.g., shows them at 1:10000, but not at 1:2000000)
The overall point_landmark style has 8 different rules using different symbols for each type and amounts to almost 550 lines of XML. The same style could be written in a much more compact way if we could store the symbol name in some attribute and expand it in the external graphic URL.
Standard SLD 1.0 does not allow for that, but GeoServer supports extensions to it, known as dynamic symbolizers, that allow generic CQL expressions to be embedded in the URL.
The data directory already contains a secondary layer, bptlandmarks_2876, which uses a different projection and has an IMAGE attribute containing the file names.
The style can then be reduced to a single rule using the following point symbolizer:
<sld:PointSymbolizer>
<sld:Graphic>
<sld:ExternalGraphic>
<sld:OnlineResource xlink:type="simple" xlink:href="./img/landmarks/${IMAGE}" />
<sld:Format>image/png</sld:Format>
</sld:ExternalGraphic>
</sld:Graphic>
<VendorOption name="labelObstacle">true</VendorOption>
</sld:PointSymbolizer>
Here is the overall style:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor
xmlns="http://www.opengis.net/sld"
xmlns:sld="http://www.opengis.net/sld"
xmlns:ogc="http://www.opengis.net/ogc"
xmlns:gml="http://www.opengis.net/gml"
xmlns:xlink="http://www.w3.org/1999/xlink" version="1.0.0">
<sld:UserLayer>
<sld:LayerFeatureConstraints>
<sld:FeatureTypeConstraint/>
</sld:LayerFeatureConstraints>
<sld:UserStyle>
<sld:Name>tl 2010 08013 pointlm</sld:Name>
<sld:Title/>
<sld:FeatureTypeStyle>
<sld:Rule>
<sld:Name>landmarks</sld:Name>
<ogc:Filter>
<ogc:Not>
<ogc:PropertyIsNull>
<ogc:PropertyName>IMAGE</ogc:PropertyName>
</ogc:PropertyIsNull>
</ogc:Not>
</ogc:Filter>
<sld:MaxScaleDenominator>100000</sld:MaxScaleDenominator>
<sld:PointSymbolizer>
<sld:Graphic>
<sld:ExternalGraphic>
<sld:OnlineResource xlink:type="simple" xlink:href="./img/landmarks/${IMAGE}" />
<sld:Format>image/png</sld:Format>
</sld:ExternalGraphic>
</sld:Graphic>
<VendorOption name="labelObstacle">true</VendorOption>
</sld:PointSymbolizer>
<sld:TextSymbolizer>
<sld:Label>
<ogc:PropertyName>FULLNAME</ogc:PropertyName>
</sld:Label>
<sld:Font>
<sld:CssParameter name="font-family">Arial</sld:CssParameter>
<sld:CssParameter name="font-size">12.0</sld:CssParameter>
<sld:CssParameter name="font-style">normal</sld:CssParameter>
<sld:CssParameter name="font-weight">normal</sld:CssParameter>
</sld:Font>
<sld:LabelPlacement>
<sld:PointPlacement>
<sld:AnchorPoint>
<sld:AnchorPointX>
<ogc:Literal>0.5</ogc:Literal>
</sld:AnchorPointX>
<sld:AnchorPointY>
<ogc:Literal>1.0</ogc:Literal>
</sld:AnchorPointY>
</sld:AnchorPoint>
<sld:Displacement>
<sld:DisplacementX>
<ogc:Literal>0.0</ogc:Literal>
</sld:DisplacementX>
<sld:DisplacementY>
<ogc:Literal>-14.0</ogc:Literal>
</sld:DisplacementY>
</sld:Displacement>
<sld:Rotation>
<ogc:Literal>0.0</ogc:Literal>
</sld:Rotation>
</sld:PointPlacement>
</sld:LabelPlacement>
<sld:Halo>
<sld:Radius>
<ogc:Literal>1.5</ogc:Literal>
</sld:Radius>
<sld:Fill>
<sld:CssParameter name="fill">#FFFFFF</sld:CssParameter>
</sld:Fill>
</sld:Halo>
<sld:Fill>
<sld:CssParameter name="fill">#000033</sld:CssParameter>
</sld:Fill>
<sld:Priority>200000</sld:Priority>
<sld:VendorOption name="autoWrap">100</sld:VendorOption>
</sld:TextSymbolizer>
</sld:Rule>
</sld:FeatureTypeStyle>
</sld:UserStyle>
</sld:UserLayer>
</sld:StyledLayerDescriptor>
And here is a map using this alternate style:

Point landmarks using dynamic symbolizers
By default SLD interprets all sizes expressed in the style sheet (e.g., line widths, symbol sizes) as being pixels on the map.
It is however possible to make the style sheet use real world units, e.g., meters or feet, by specifying the desired unit of measure as an attribute of the symbolizer. The supported units of measure are:
- meter
- foot
- pixel
The following line style uses a line width of 40 meters:
<LineSymbolizer uom="http://www.opengeospatial.org/se/units/metre">
<Stroke>
<CssParameter name="stroke">#000033</CssParameter>
<CssParameter name="stroke-width">40</CssParameter>
</Stroke>
</LineSymbolizer>
Create a new style named line40m using the following SLD:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xsi:schemaLocation="http://www.opengis.net/sld StyledLayerDescriptor.xsd"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <NamedLayer>
    <Name>line40m</Name>
    <UserStyle>
      <Title>40 meter wide line</Title>
      <FeatureTypeStyle>
        <Rule>
          <LineSymbolizer uom="http://www.opengeospatial.org/se/units/metre">
            <Stroke>
              <CssParameter name="stroke">#000033</CssParameter>
              <CssParameter name="stroke-width">40</CssParameter>
            </Stroke>
          </LineSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
Associate the line40m style to the MainRd layer as a secondary style:
Adding the line40m style as a secondary style for Mainrd
Preview the MainRd layer and switch to the line40m style:
A uom based line, zoomed out
Zoom in and out and observe how the width of the line on screen varies with the zoom level.
Zooming in on the same line
GeoServer provides a number of filter functions that can manipulate geometries by transforming them into something different: this is what we call geometry transformations in SLD. This section shows how to use them.
Using skills learned in the adding styles section, create a new style named mainrd_transform using the following SLD:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd">
  <NamedLayer>
    <Name>Roads and vertices</Name>
    <UserStyle>
      <FeatureTypeStyle>
        <Rule>
          <LineSymbolizer>
            <Stroke />
          </LineSymbolizer>
          <PointSymbolizer>
            <Geometry>
              <ogc:Function name="vertices">
                <ogc:PropertyName>the_geom</ogc:PropertyName>
              </ogc:Function>
            </Geometry>
            <Graphic>
              <Mark>
                <WellKnownName>circle</WellKnownName>
                <Fill>
                  <CssParameter name="fill">#FF0000</CssParameter>
                </Fill>
              </Mark>
              <Size>6</Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
Note
The vertices function returns a multi-point made with all the vertices of the original geometry.
Using skills learned in the adding styles section, modify the styling of the Mainrd layer and add mainrd_transform as an alternate style (hint: select the mainrd_transform style in the first list below "available styles", then use the right arrow to move it into the "selected styles"):

Adding the mainrd_transform style as a secondary style for Mainrd
Use the Preview link to display the Mainrd layer, then open the options box and choose the alternate style from the drop down:
Extracting and showing the vertices out of a geometry
Using skills learned in the adding styles section, create a new style mainrd_buffer using the following SLD:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd">
  <NamedLayer>
    <Name>Roads and vertices</Name>
    <UserStyle>
      <FeatureTypeStyle>
        <Rule>
          <PolygonSymbolizer>
            <Geometry>
              <ogc:Function name="buffer">
                <ogc:PropertyName>the_geom</ogc:PropertyName>
                <ogc:Literal>200</ogc:Literal>
              </ogc:Function>
            </Geometry>
            <Fill>
              <CssParameter name="fill">#7F7F7F</CssParameter>
              <CssParameter name="fill-opacity">0.3</CssParameter>
            </Fill>
          </PolygonSymbolizer>
          <LineSymbolizer>
            <Stroke />
          </LineSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
Note
The buffer function builds a polygon of all the points that are within the specified distance from the original geometry.
As done previously, modify the styling of the Mainrd layer and add mainrd_buffer as an alternate style:

Adding the mainrd_buffer style as a secondary style for Mainrd
Use the Map Preview to preview the new style.
Extracting start and end point of a line
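Like vertices and buffer above, line endpoints can be extracted with geometry transformation functions. A minimal sketch of such a style, using GeoServer's startPoint and endPoint filter functions (the mark shapes and colors here are illustrative choices, not taken from the training data):

```xml
<FeatureTypeStyle>
  <Rule>
    <!-- draw the line itself with default stroke settings -->
    <LineSymbolizer>
      <Stroke />
    </LineSymbolizer>
    <!-- mark the start of each line with a green square -->
    <PointSymbolizer>
      <Geometry>
        <ogc:Function name="startPoint">
          <ogc:PropertyName>the_geom</ogc:PropertyName>
        </ogc:Function>
      </Geometry>
      <Graphic>
        <Mark>
          <WellKnownName>square</WellKnownName>
          <Fill>
            <CssParameter name="fill">#00FF00</CssParameter>
          </Fill>
        </Mark>
        <Size>8</Size>
      </Graphic>
    </PointSymbolizer>
    <!-- mark the end of each line with a red circle -->
    <PointSymbolizer>
      <Geometry>
        <ogc:Function name="endPoint">
          <ogc:PropertyName>the_geom</ogc:PropertyName>
        </ogc:Function>
      </Geometry>
      <Graphic>
        <Mark>
          <WellKnownName>circle</WellKnownName>
          <Fill>
            <CssParameter name="fill">#FF0000</CssParameter>
          </Fill>
        </Mark>
        <Size>8</Size>
      </Graphic>
    </PointSymbolizer>
  </Rule>
</FeatureTypeStyle>
```

As with the other transformations, the functions are evaluated at rendering time, so the original data is never modified.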
GeoServer can produce maps with charts through the chart extension. Bundled with GeoServer is an open source implementation of the (deprecated) Google Chart API called Eastwood Charts.
You can display bar or pie charts for each feature on your map (most Google Chart types, except Google-o-meter and spider charts, are supported by the Eastwood library, though not all of them are exposed by the GeoServer extension). You can control colors and labels, and you can use percentages stored in your data attributes or compute percentages from counts on the fly.
The charting extension makes use of a URL inside the <ExternalGraphic> element of SLD documents. The URL follows the Google Chart API syntax, but the chart is generated internally by GeoServer: no call to external services is made, which removes any privacy or security concern and provides maximum performance. All the information about the chart, such as chart data, size, colors, and labels, is part of the URL.
Inside the URL we can use variable substitution to reference the attributes of the underlying features read from the data source, allowing us to create dynamic charts from our own data.
An example of a chart created using an <ExternalGraphic> element is shown here below:
<ExternalGraphic>
  <OnlineResource xlink:href="http://chart?cht=p&chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}&chf=bg,s,FFFFFF00" />
  <Format>application/chart</Format>
</ExternalGraphic>
All URLs start with http://chart? followed by the parameters that specify chart data and appearance. Parameters are name=value pairs separated by an ampersand (&), and can appear in any order after the ?. All charts require at minimum the following parameters: cht (chart type), chd (chart data), and chs (chart size). There are many more parameters for additional options, and you can specify as many additional parameters as the chart supports.
We are now going to see examples and explanations for the various types of charts supported, starting with the standard features supported by all charts.
All Chart URLs have the following format:
http://chart?cht=<chart_type>&chd=<chart_data>&chs=<chart_size>&...more_parameters...
The standard parameters as part of the above URL have the following meaning:
- The cht parameter controls the type of chart; for example, cht=p can be used for a 2D (flat) pie.
- The chs parameter controls the size of the chart; for example, chs=500x200 specifies the chart size (width x height), in pixels. As an alternative we can use the <Size> element of external graphics (we'll show an example in the following).
- The chd parameter controls the chart data; for example, chd=t:60,40 can be used to provide tabular data to the chart rendering system. We can use variable substitution and other GeoServer mechanisms to pass data source values as the chart data. A typical example would be chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}, where MALE, FEMALE and PERSONS are attributes of the GeoServer data source.
- The chl parameter controls the chart labels; for example, chl=Male|Female can be used to label a chart.
Quoting Wikipedia,
“A pie chart (or a circle graph) is a circular chart divided into sectors, illustrating numerical proportion. In a pie chart, the arc length of each sector (and consequently its central angle and area), is proportional to the quantity it represents.”
Let us now create a sample map using pie charts, leveraging the data provided with the training. Afterwards we will review the various options.
To print dynamic pie charts on a map over the United States, add a new style called statespies by adding the SLD provided below, as indicated in this picture.
In the SLD Editor enter the following XML:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <NamedLayer>
    <Name></Name>
    <UserStyle>
      <Name>Pie charts</Name>
      <FeatureTypeStyle>
        <Rule>
          <PolygonSymbolizer>
            <Fill>
              <CssParameter name="fill">#AAAAAA</CssParameter>
            </Fill>
            <Stroke />
          </PolygonSymbolizer>
        </Rule>
      </FeatureTypeStyle>
      <FeatureTypeStyle>
        <Rule>
          <PointSymbolizer>
            <Geometry>
              <ogc:Function name="centroid">
                <ogc:PropertyName>the_geom</ogc:PropertyName>
              </ogc:Function>
            </Geometry>
            <Graphic>
              <ExternalGraphic>
                <OnlineResource xlink:href="http://chart?cht=p&chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}&chf=bg,s,FFFFFF00" />
                <Format>application/chart</Format>
              </ExternalGraphic>
              <Size>
                <ogc:Add>
                  <ogc:Literal>20</ogc:Literal>
                  <ogc:Mul>
                    <ogc:Div>
                      <ogc:PropertyName>PERSONS</ogc:PropertyName>
                      <ogc:Literal>20000000.0</ogc:Literal>
                    </ogc:Div>
                    <ogc:Literal>60</ogc:Literal>
                  </ogc:Mul>
                </ogc:Add>
              </Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
In order to have the states layer use this style with no additional indications, modify the default style of the states layer using the user interface to point to the newly created statespies.
Now go to the Layer Preview to view the new style in action.
Let us quickly analyse the components of the ExternalGraphic call, which follow the rules of a Google Charts API call:
Pie Chart Types
The cht parameter allows us to control the type of pie. Supported options are as follows:
- cht=p for a 2D (flat) Pie
- cht=p3 for a 3D Pie
- cht=pc is not supported.
Pie Chart Data
chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}
The chart data is expressed in "text" format; in particular, the first value is the result of 100 * MALE / PERSONS, where MALE and PERSONS are two attributes of the feature being rendered.
Pie Chart Background
chf=bg,s,FFFFFF00: we state that the chart background fill is solid, white and fully transparent. The color is expressed as RRGGBBAA, where AA is the alpha component controlling transparency: 00 is fully transparent, FF (255) is fully opaque.
Pie Chart Size
The size of the chart is controlled using the usual <Size> element of external graphics; in particular, it is set up to be proportional to the PERSONS attribute via the expression 20 + (PERSONS / 20,000,000) * 60.
Pie Chart Colors
We can specify the colors of all values, each value, or some values using the chco parameter. This parameter is optional; when omitted, default colors are used.
Syntax is as follows:
chco=<color_slice_1>,<color_slice_2>
for specifying individual colors for slices and
chco=<color_1>|<color_2>
for specifying a gradient to be applied to the slices.
where color is in RRGGBB hexadecimal format.
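For example, reusing the male/female data above, blue and red slice colors can be requested like this (the same chco value appears in the comprehensive example in this section):

```
http://chart?cht=p&chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}&chco=0000ff,ff0000
```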
Pie Chart Labels
We can specify labels for individual pie chart slices using the chl parameter.
The syntax is as follows:
chl=<label_value>| ... |<label_value>
Pie Chart Rotation
Pie Chart Rotation can be achieved via the chp parameter. By default, the first series is drawn starting at 3:00, continuing clockwise around the chart.
The syntax is as follows:
chp=<radians>
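For example, assuming the Eastwood implementation follows the Google API here, a rotation of half pi radians moves the start of the first slice a quarter turn clockwise, from the 3:00 to the 6:00 position:

```
http://chart?cht=p&chd=t:60,40&chp=1.5708
```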
Additional information on creating pie charts can be found in the official pie chart documentation.
A more comprehensive example can be found here below:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <NamedLayer>
    <Name></Name>
    <UserStyle>
      <Name>Pie charts</Name>
      <FeatureTypeStyle>
        <Rule>
          <PolygonSymbolizer>
            <Fill>
              <CssParameter name="fill">#ffffff</CssParameter>
            </Fill>
            <Stroke />
          </PolygonSymbolizer>
        </Rule>
      </FeatureTypeStyle>
      <FeatureTypeStyle>
        <Rule>
          <PointSymbolizer>
            <Geometry>
              <ogc:Function name="centroid">
                <ogc:PropertyName>the_geom</ogc:PropertyName>
              </ogc:Function>
            </Geometry>
            <Graphic>
              <ExternalGraphic>
                <OnlineResource xlink:href="http://chart?cht=p&chf=bg,s,FFFFFF00&chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}&chl=MALE|FEMALE&chs=200x100&chco=0000ff,ff0000&chtt=M+F" />
                <Format>application/chart</Format>
              </ExternalGraphic>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
The resulting image can be found here below:
Quoting Wikipedia,
“A bar chart or bar graph is a chart with rectangular bars with lengths proportional to the values that they represent. The bars can be plotted vertically or horizontally. A vertical bar chart is sometimes called a column bar chart.”
Let us now create a sample map using bar charts, leveraging the data provided with the training. Afterwards we will review the various options.
To print dynamic bar charts on a map over the United States, add a new style called statesbars by adding the SLD provided below, as indicated in this picture.
In the SLD Editor enter the following XML:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <NamedLayer>
    <Name></Name>
    <UserStyle>
      <Name>Pie charts</Name>
      <FeatureTypeStyle>
        <Rule>
          <PolygonSymbolizer>
            <Fill>
              <CssParameter name="fill">#ffffff</CssParameter>
            </Fill>
            <Stroke />
          </PolygonSymbolizer>
        </Rule>
      </FeatureTypeStyle>
      <FeatureTypeStyle>
        <Rule>
          <PointSymbolizer>
            <Geometry>
              <ogc:Function name="centroid">
                <ogc:PropertyName>the_geom</ogc:PropertyName>
              </ogc:Function>
            </Geometry>
            <Graphic>
              <ExternalGraphic>
                <OnlineResource xlink:href="http://chart?cht=bvg&chf=bg,s,FFFFFF00&chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}" />
                <Format>application/chart</Format>
              </ExternalGraphic>
              <Size>
                <ogc:Add>
                  <ogc:Literal>20</ogc:Literal>
                  <ogc:Mul>
                    <ogc:Div>
                      <ogc:PropertyName>PERSONS</ogc:PropertyName>
                      <ogc:Literal>20000000.0</ogc:Literal>
                    </ogc:Div>
                    <ogc:Literal>60</ogc:Literal>
                  </ogc:Mul>
                </ogc:Add>
              </Size>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
Let us quickly analyse the components of the ExternalGraphic call, which follow the rules of a Google Charts API call:
Bar Chart Types
The cht parameter allows us to control the type of bar chart. Supported options are as follows:
- cht=bvg for simple 2D vertical bars laid out as groups.
- cht=bhg for simple 2D horizontal bars laid out as groups.
- cht=bvs for simple 2D vertical bars laid out as stacks.
- cht=bvo is not supported.
Bar Chart Data
chd=t:${100 * MALE / PERSONS},${100 * FEMALE / PERSONS}
The chart data is expressed in "text" format; in particular, the first value is the result of 100 * MALE / PERSONS, where MALE and PERSONS are two attributes of the feature being rendered. This type of sequence is good for grouped bar charts: values for successive groups are separated by |, while values within the same group are separated by commas.
Bar Chart Colors
Note
Note that by default, all series are displayed in the same color; if you don’t specify different colors for different series, it will be hard to distinguish that there are multiple series in your chart.
You can specify the colors of individual bars, individual series, or multiple series using the chco parameter. Syntax is as follows:
chco=<series_1_color>, ..., <series_n_color>
or
chco=<series_1_bar_1>|<series_1_bar_2>|...|<series_1_bar_n>,<series_2>,...,<series_n>
where color is in RRGGBB hexadecimal format.
Bar Chart Background
chf=bg,s,FFFFFF00: we state that the chart background fill is solid, white and fully transparent. The color is expressed as RRGGBBAA, where AA is the alpha component controlling transparency: 00 is fully transparent, FF (255) is fully opaque.
Bar Chart Size
The size of the chart is controlled using the usual <Size> element of external graphics; in particular, it is set up to be proportional to the PERSONS attribute via the expression 20 + (PERSONS / 20,000,000) * 60.
Bar Chart Labels
Bar charts support standard axis labels, but labels along the base of the bars are assigned to individual bars rather than spread out along the bar chart. (To spread out labels evenly, use the chxp parameter.) If you specify axis labels but don't specify custom labels along the bar axis, the bar labels will be the index number of each bar or group of bars. You can customize axis labels using the chxl parameter.
The syntax is as follows:
chl=<label_value>| ... |<label_value>
Additional information on creating bar charts can be found in the official bar chart documentation.
A more comprehensive example can be found here below:
<?xml version="1.0" encoding="ISO-8859-1"?>
<StyledLayerDescriptor version="1.0.0"
    xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.0.0/StyledLayerDescriptor.xsd"
    xmlns="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:xlink="http://www.w3.org/1999/xlink"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <NamedLayer>
    <Name></Name>
    <UserStyle>
      <Name>Pie charts</Name>
      <FeatureTypeStyle>
        <Rule>
          <PolygonSymbolizer>
            <Fill>
              <CssParameter name="fill">#dddddd</CssParameter>
            </Fill>
            <Stroke />
          </PolygonSymbolizer>
        </Rule>
      </FeatureTypeStyle>
      <FeatureTypeStyle>
        <Rule>
          <PointSymbolizer>
            <Graphic>
              <ExternalGraphic>
                <OnlineResource xlink:href="http://chart?chxt=x,y&chxl=0:|M|F&cht=bvg&chco=0000ff,ff0000&chf=bg,s,FFFFFF00&chd=t:${100 * MALE / PERSONS}|${100 * FEMALE / PERSONS}&chs=200x200&chtt=M+F" />
                <Format>application/chart</Format>
              </ExternalGraphic>
            </Graphic>
          </PointSymbolizer>
        </Rule>
      </</FeatureTypeStyle>
    </UserStyle>
  </NamedLayer>
</StyledLayerDescriptor>
The resulting image can be found here below:
In the previous sections we created and optimized some vector styles. In this section we will deal with a styled SRTM raster and see how to get a better visualization of that layer by adding a hillshade.
From the Welcome Page, open the Layer Preview and select the OpenLayers link for the geosolutions:srtm layer.
SRTM rendering with DEM style
There is a DEM style associated with that SRTM layer, resulting in the colored rendering shown.
Return to the GeoServer Welcome Page, open the Styles list and click the dem style to see which color map is applied.
Note
You have to be logged in as Administrator in order to edit/check styles.
Style editing
Note the entries with opacity = 0.0, which make the "no data" values transparent.
The current DEM style produces a pleasant rendering of the SRTM dataset, but we can get better results by combining it with a hillshade layer, which will be created with another GDAL utility (gdaldem).
Open a shell and run:
Linux:
gdaldem hillshade -z 5 -s 111120 ${TRAINING_ROOT}/geoserver_data/data/boulder/srtm_boulder.tiff ${TRAINING_ROOT}/geoserver_data/data/boulder/srtm_boulder_hs.tiff -co tiled=yes
Windows:
gdaldem hillshade -z 5 -s 111120 %TRAINING_ROOT%\geoserver_data\data\boulder\srtm_boulder.tiff %TRAINING_ROOT%\geoserver_data\data\boulder\srtm_boulder_hs.tiff -co tiled=yes
Note
The -z parameter exaggerates the elevation, the -s parameter provides the ratio between the elevation units and the ground units (degrees in this case), and -co tiled=yes makes gdaldem generate a TIFF with inner tiling. We'll investigate this last option better in the following pages.
From the Welcome Page, select Add a new style as previously seen in the Adding a style section.
In the SLD Editor enter the following XML:
<?xml version="1.0" encoding="UTF-8"?>
<sld:StyledLayerDescriptor xmlns="http://www.opengis.net/sld"
    xmlns:sld="http://www.opengis.net/sld"
    xmlns:ogc="http://www.opengis.net/ogc"
    xmlns:gml="http://www.opengis.net/gml" version="1.0.0">
  <sld:UserLayer>
    <sld:LayerFeatureConstraints>
      <sld:FeatureTypeConstraint/>
    </sld:LayerFeatureConstraints>
    <sld:UserStyle>
      <sld:Title/>
      <sld:FeatureTypeStyle>
        <sld:Name>name</sld:Name>
        <sld:FeatureTypeName>Feature</sld:FeatureTypeName>
        <sld:Rule>
          <sld:MinScaleDenominator>75000</sld:MinScaleDenominator>
          <sld:RasterSymbolizer>
            <sld:Geometry>
              <ogc:PropertyName>grid</ogc:PropertyName>
            </sld:Geometry>
            <sld:ColorMap>
              <sld:ColorMapEntry color="#000000" opacity="0.0" quantity="0.0"/>
              <sld:ColorMapEntry color="#999999" opacity="0.7" quantity="1.0"/>
              <sld:ColorMapEntry color="#FFFFFF" opacity="0.7" quantity="256.0"/>
            </sld:ColorMap>
          </sld:RasterSymbolizer>
        </sld:Rule>
      </sld:FeatureTypeStyle>
    </sld:UserStyle>
  </sld:UserLayer>
</sld:StyledLayerDescriptor>
Note
Note the opacity values being less than 1, which make the layer partially transparent and allow overlaying it on other layers.
Set hillshade as the name and then click the Submit button.
Select Add stores from the GeoServer Welcome Page to add the previously created hillshade raster.
Select GeoTIFF - Tagged Image File Format with Geographic information from the set of available Raster Data Sources.
Specify hillshade as the name in the Data Source Name field of the interface.
Click on the browse link in order to set the GeoTIFF location in the URL field.
Note
Make sure to specify the srtm_boulder_hs.tiff file previously created with gdaldem, which should be located at $TRAINING_ROOT/geoserver_data/data/boulder
Click Save.
Publish the layer by clicking on the publish link.
Publishing Raster Layer
Set SRTM Hillshade as the Title.
Switch to the Publishing tab.
Make sure to set the default style to hillshade in the Publishing –> Default Style section.
Editing Raster Publishing info
Click Save to create the new layer.
Use the Layer Preview to preview the new layer with the hillshade style.
Previewing the new raster layer with the hillshade style applied
Edit the Layer Preview URL in your browser by locating the layers parameter.
Insert geosolutions:srtm, (note the trailing comma) before geosolutions:hillshade; in the styles parameter, add a comma before hillshade to make GeoServer use the default style for the srtm layer.
Press Enter to send the updated request. The Layer Preview should change so that you can see both the srtm and hillshade layers.
Layer preview with srtm and hillshade being overlaid
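The edited request will contain layer and style lists like the following (the host, port and remaining parameters are illustrative; keep whatever the Layer Preview generated for you):

```
http://localhost:8080/geoserver/wms?service=WMS&version=1.1.1&request=GetMap
    &layers=geosolutions:srtm,geosolutions:hillshade
    &styles=,hillshade
    &bbox=...&width=...&height=...&srs=...&format=application/openlayers
```

The empty entry before the comma in styles tells GeoServer to use the default style for the first layer in the layers list.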
Styling with CSS¶
The CSS extension module allows building map styles using a compact, expressive styling language already well known to most web developers: Cascading Style Sheets.
The standard CSS language has been extended to allow for map filtering and for managing all the details of map production. In this section we'll create a few simple styles with the CSS language.
Creating line styles¶
From the main menu bar select the CSS styles entry
Click on the “Choose a different layer” link and switch to the Mainrd layer
Click on the "Create a new style" link and input css_mainrd as the style name, then press the "Create" button
Creating a new CSS style for the Mainrd layer
Set the style contents to the following, press submit and switch to the map preview
* { stroke: orange; stroke-width: 6; stroke-linecap: round; }
Now let’s create a cased line effect by adding a second set of colours and widths, and forcing two different z indexes for them. Press submit, look at the map and at the generated SLD
* { stroke: orange, yellow; stroke-width: 6, 2; stroke-linecap: round; z-index: 1, 2; }
Finally, let’s add a label that follows the road
* {
  stroke: orange, yellow;
  stroke-width: 6, 2;
  stroke-linecap: round;
  z-index: 1, 2;
  label: [LABEL_NAME];
  font-fill: black;
  font-family: Arial;
  font-size: 12;
  font-weight: bold;
  halo-color: white;
  halo-radius: 2;
  -gt-label-follow-line: true;
  -gt-label-group: true;
  -gt-label-repeat: 400;
  -gt-label-max-displacement: 50;
}
Creating point styles¶
Similarly to the previous section, switch the map to “bptlandmarks” and create a new style called “css_bptlandmarks”
Insert the following in the CSS to get a basic point style, and press “Submit”:
* { mark: symbol('circle'); mark-size: 5; }
Let’s change the color of the points by specifying a fill. If we specified a fill in the top level rule it would be interpreted as a polygonal fill, to express that we want to fill inside the marks we have to create a new rule with the
:mark
pseudo-selector:* { mark: symbol('circle'); mark-size: 5; } :mark { fill: cyan; stroke: darkblue; }
Finally, let’s override the default styling for all shopping centers. Shopping centers are not easy to find, they have a
MTFCC
category ofC3081
and containShopping
in the name* { mark: symbol('circle'); mark-size: 5; } :mark { fill: cyan; stroke: darkblue; } [MTFCC = 'C3081' AND FULLNAME LIKE '%Shopping%'] { mark: url("./img/landmarks/shop_supermarket.p.16.png"); mark-size: ; }
Creating polygon styles¶
For this exercise, change the current layer to “WorldCountries” and create a new style called “css_worldcountries”
We want to create a simple 3-class thematic map based on the country population, stored in the POP_EST attribute:
[POP_EST < 10000000] { fill: lightgrey; }

[POP_EST >= 10000000 AND POP_EST < 50000000] { fill: olive; }

[POP_EST >= 50000000] { fill: salmon; }
Let’s also add a very thin black border around all polygons, regardless of their population, using the
*
selector[POP_EST < 10000000] { fill: lightgrey; } [POP_EST >= 10000000 AND POP_EST < 50000000] { fill: olive; } [POP_EST > 50000000] { fill: salmon } * { stroke: black; stroke-width: 0.2; }
Styling raster data¶
For this exercise we are going to switch to the "srtm" layer and create a new css_raster style.
In order to activate raster styling, the raster-channels property needs to be specified, in this case with a value of auto to let the engine choose automatically the bands to use:
* { raster-channels: auto; }
The above map shows GeoServer automatically creating a grayscale map out of the elevation data, with automatic adaptation to the current contents of the map (the black areas you see once the style is applied are "no data" areas; try moving to an area that does not have any).
Now let’s apply a color map to get a nicer and consistent looking map instead
* {
  raster-channels: auto;
  raster-color-map:
    color-map-entry(black, 0, 0)
    color-map-entry(green, 1500)
    color-map-entry(yellow, 2000)
    color-map-entry(maroon, 3000)
    color-map-entry(white, 4000);
}
Creating a Base Map with a Layer Group¶
The best way to set up a map with more than one layer for consumption is to create a Layer Group; that is what we are going to do in this section.
Locate the Layer Group link and click it.
Click the Add new layer group link.
Add new layer group link
Name it test.
Click the Add layer link:
Add new layer
Select the “Mainrd” layer in the popup window.
Select a layer
Add also "BoulderCityLimits" and "bplandmarks"; the final list should look like this:
List of layers for the group
Note
You can use the green arrows to adjust the ordering of the layers until it looks like the above figure.
Click the generate bounds button to have GeoServer compute the group bounds from the layers inside of it:
Scroll to the bottom of the page and then click Save.
If all went well, you should see something like this:
After a successful save.
Note
The autogenerated bounds may be too large, and the map preview may look off as a result. You can optionally reduce the layer group bounds by manually inserting the bbox values. Good values are the following: minx = 3057566.8646; maxx = 3079500.65246; miny = 1241929.35617; maxy = 1257467.5777
The layer group is now ready to be consumed:
Navigate to the GeoServer Welcome Page.
Go to the Layer Preview link at the bottom of the lefthand menu.
Layer Preview
Find the test layer group and click on the OpenLayers link. You will see a slippy map with all the configured layers of the Boulder district. You can zoom using the mouse wheel, pan by dragging, and zoom to a window by holding SHIFT pressed while dragging.
OpenLayers view
Note
Check the browser’s address bar for an interesting sample WMS request for the layer.
As you might have noticed before, a larger, more realistic group has already been configured for you. It is named boulder. Have a look at its definition and add the Mainrd layer to it. Then, using the green arrows, move the layer to the position shown in the screenshot.
A new layer inside the existing layer group.
Then use the Map Preview to display it.
Try clicking in the middle of the map. A couple of tables with more information about the vector features that have been clicked should appear at the bottom.
Feature info
Try zooming in more and more. New layers should start to appear. This is scale dependent styling.
Now let’s see how desktop clients handle the layer group, and how we can change the way they see it.
Go to the command line and enter the workshop directory. If you haven’t done so yet, run setenv.bat and then udig.bat.
Once both GeoServer and uDig are up, arrange their windows so that you can see both at the same time.
Now go to the GeoServer home page, where all the capabilities links are kept, take the WMS 1.1.1 one, and drag&drop it into uDig “Catalog” tab to import the WMS as a uDig data source:
Look at the layer tree. The boulder group is visible as a simple layer, and all the layers it contains are actually shown at the same level as the group.
Let’s change it so that the layer group’s internal structure is shown. Go back to the “boulder” layer group page, change its “mode” to “Named tree”, then press the “Save” button.
We need to make uDig aware of the change. Right click the root of the capabilities tree and choose the “Reset” command.
Now most of the layers are contained inside the “boulder” group
Filtering Maps¶
This section shows the GeoServer WMS filtering capabilities.
Navigate to the GeoServer Welcome Page.
Go to the Layer Preview link at the bottom of the left-hand menu and show the geosolutions:WorldCountries layer with OpenLayers ‘Common Format’.
Showing the GeoServer layer preview
Show the layer with OpenLayers
From the Filter combo box select ‘CQL’ and enter the following command in the text field:
POP_EST <= 5000000 AND POP_EST > 100000
Click ‘Apply Filter’ button on the right.
Result of the CQL filter
The corresponding WMS request is:
http://localhost:8083/geoserver/geosolutions/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:WorldCountries&styles=&bbox=-180.0,-89.99889902136009,180.00000000000003,83.59960032829278&width=684&height=330&srs=EPSG:4326&format=image/png&CQL_FILTER=POP_EST%20%3C=%205000000%20AND%20POP_EST%20%3E100000
Now enter the following command in the text field:
DISJOINT(the_geom, POLYGON((-90 40, -90 45, -60 45, -60 40, -90 40))) AND strToLowerCase(NAME) LIKE '%on%'
Click ‘Apply Filter’ button on the right.
Result of the CQL filter
The corresponding WMS request is:
http://localhost:8083/geoserver/geosolutions/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:WorldCountries&styles=&bbox=-180.0,-89.99889902136009,180.00000000000003,83.59960032829278&width=684&height=330&srs=EPSG:4326&format=image/png&CQL_FILTER=DISJOINT%28the_geom%2C%20POLYGON%28%28-90%2040%2C%20-90%2045%2C%20-60%2045%2C%20-60%2040%2C%20-90%2040%29%29%29%20AND%20strToLowerCase%28NAME%29%20LIKE%20%27%25on%25%27
From the Filter combo box select ‘OGC’ and enter the following filter in the text field:
<Filter><PropertyIsEqualTo><PropertyName>TYPE</PropertyName><Literal>Sovereign country</Literal></PropertyIsEqualTo></Filter>
Click ‘Apply Filter’ button on the right.
Result of the OGC filter
The corresponding WMS request is
http://localhost:8083/geoserver/geosolutions/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:WorldCountries&styles=&bbox=-180.0,-89.99889902136009,180.00000000000003,83.59960032829278&width=684&height=330&srs=EPSG:4326&format=image/png&CQL_FILTER=TYPE%20%3D%20%27Sovereign%20country%27
From the Filter combo box select ‘FeatureID’ and enter the following feature ids in the text field, separated by commas:
WorldCountries.227,WorldCountries.184,WorldCountries.33
Click ‘Apply Filter’ button on the right.
Result of the FeatureID filter
The corresponding WMS request is:
http://localhost:8083/geoserver/geosolutions/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:WorldCountries&styles=&bbox=-180.0,-89.99889902136009,180.00000000000003,83.59960032829278&width=684&height=330&srs=EPSG:4326&format=image/png&FEATUREID=WorldCountries.227,WorldCountries.184,WorldCountries.33
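The filtered GetMap requests above can also be assembled programmatically instead of copying the encoded URLs by hand. Below is a minimal sketch using only Python’s standard library; the host, port, layer name and bbox are the workshop defaults, so adjust them to your own setup.

```python
# Build a CQL-filtered WMS GetMap URL. urlencode takes care of escaping
# operators like <=, >, and spaces inside the CQL expression.
from urllib.parse import urlencode

BASE = "http://localhost:8083/geoserver/geosolutions/wms"

def getmap_url(cql_filter):
    """Return a GetMap URL that applies the given CQL filter."""
    params = {
        "service": "WMS",
        "version": "1.1.0",
        "request": "GetMap",
        "layers": "geosolutions:WorldCountries",
        "styles": "",
        "bbox": "-180.0,-90.0,180.0,90.0",
        "width": 684,
        "height": 330,
        "srs": "EPSG:4326",
        "format": "image/png",
        "CQL_FILTER": cql_filter,
    }
    return BASE + "?" + urlencode(params)

url = getmap_url("POP_EST <= 5000000 AND POP_EST > 100000")
print(url)
```

Pasting the printed URL into a browser should produce the same image as the first CQL example above.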
Producing and Using palettes¶
GeoServer has the ability to output high quality 256 color images. This tutorial introduces palette concepts and the various image generation options, and offers a quality/resource comparison of them in different situations. In this section the task is to make use of palettes.
Note
Some image formats, such as GIF or PNG, can use a palette, which is normally a table of 256 colors used to get better compression (sometimes trading it for lower image quality). Basically, instead of representing each pixel with its full color triplet, which takes 24 bits (plus possibly 8 more for transparency), they use an 8 bit index that represents the position inside the palette, and thus the color. This allows for images that are 3-4 times smaller than standard images, with the limitation that only 256 different colors can appear in the image itself. Depending on the actual map, this may be a very stringent limitation, visibly degrading the image quality, or the output may be indistinguishable from a full color image. For many common vector maps one can easily find 256 representative colors that are a good fit. In the latter case, the smaller footprint of paletted images is usually a gain in both performance and costs, because more data can be served over the same internet connection, and clients will obtain responses faster.
Options to enable paletted output¶
The easiest way to get a paletted image output is to ask for a 256 color output format, such as:
- image/png8: PNG output, with a 256 color palette
- image/gif: standard GIF output
These output formats, if no other parameters are provided, do compute the optimal palette on the fly. This is an expensive process (CPU bound) but, depending on the speed of the network connecting the server and the client, the extra CPU cost can be offset by a lower data transfer time (especially on slow/busy networks).
Optimal palette computation is in any case repetitive work that can be done up front: a user can compute the optimal palette once and tell GeoServer to use it. There are three ways to do so:
- Use the internet safe palette, a standard palette built into GeoServer, by appending palette=safe to the GetMap request. Of course, to get good results, the styling will have to use the colors in that palette.
- Provide a palette by example. In this case, the user generates a 256 color image using an external program (such as Photoshop) and saves it into the $GEOSERVER_DATA_DIR/palettes directory. The sample file can be either in GIF or PNG format. If the file is named mypalette.gif or mypalette.png, the user will be able to reference it by appending palette=mypalette to the GetMap request. GeoServer will load the palette from the file and use it.
- Provide a palette file. The process is just as before, but this time only the palette, in .PAL format, is stored into the $GEOSERVER_DATA_DIR/palettes directory. The PAL file is in Microsoft Palette Format, and can be generated by programs such as Paint Shop Pro and IrfanView.
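As a compact summary of these options, the hypothetical Python helper below maps each palette strategy to the GetMap parameters it requires; the function name and structure are illustrative, not part of GeoServer.

```python
# Map a palette strategy to the extra GetMap query parameters it needs.
def palette_params(strategy, palette_name=None):
    """Return the GetMap parameters enabling the given paletted output."""
    if strategy == "png8":
        # 256-color PNG; optimal palette computed on the fly (CPU cost)
        return {"format": "image/png8"}
    if strategy == "gif":
        # standard GIF output, also paletted
        return {"format": "image/gif"}
    if strategy == "safe":
        # built-in internet safe palette
        return {"format": "image/png", "palette": "safe"}
    if strategy == "custom":
        # assumes <palette_name>.gif/.png/.pal exists under
        # $GEOSERVER_DATA_DIR/palettes
        return {"format": "image/png", "palette": palette_name}
    raise ValueError("unknown strategy: " + strategy)

print(palette_params("custom", "boulder"))
```

The returned dictionary can be merged into the GetMap parameters before URL-encoding the request.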
An Example with Vector Data¶
Enough theory, let’s have a look at how to deal with paletted images in practice. We’ll use the Boulder base map to gather some numbers, changing various parameters in order to play with formats and palettes. Here goes the sampler:
The standard PNG full color output:
http://localhost:8083/geoserver/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:BoulderCityLimits,geosolutions:blakes,geosolutions:bplandmarks,geosolutions:brivers,geosolutions:Mainrd&styles=&bbox=3056181.93510,1237476.92868,3080671.07513,1260141.38768&width=512&height=475&srs=EPSG:2876&format=image/png

The standard PNG output
Parameters: FORMAT=image/png | Size: 105.5 KB | Map generation time: 186 ms
JPEG output:
http://localhost:8083/geoserver/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:BoulderCityLimits,geosolutions:blakes,geosolutions:bplandmarks,geosolutions:brivers,geosolutions:Mainrd&styles=&bbox=3056181.93510,1237476.92868,3080671.07513,1260141.38768&width=512&height=475&srs=EPSG:2876&format=image/jpeg

JPEG output
Parameters: FORMAT=image/jpeg | Size: 43.2 KB | Map generation time: 100 ms
The PNG8 output:
http://localhost:8083/geoserver/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:BoulderCityLimits,geosolutions:blakes,geosolutions:bplandmarks,geosolutions:brivers,geosolutions:Mainrd&styles=&bbox=3056181.93510,1237476.92868,3080671.07513,1260141.38768&width=512&height=475&srs=EPSG:2876&format=image/png8

The PNG8 output
Parameters: FORMAT=image/png8 | Size: 48.0 KB | Map generation time: 190 ms
PNG + internet safe palette:
http://localhost:8083/geoserver/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:BoulderCityLimits,geosolutions:blakes,geosolutions:bplandmarks,geosolutions:brivers,geosolutions:Mainrd&styles=&bbox=3056181.93510,1237476.92868,3080671.07513,1260141.38768&width=512&height=475&srs=EPSG:2876&format=image/png&palette=safe

The PNG output + internet safe palette
Parameters: FORMAT=image/png&palette=safe | Size: 38.8 KB | Map generation time: 161 ms
PNG + palette by example:
http://localhost:8083/geoserver/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:BoulderCityLimits,geosolutions:blakes,geosolutions:bplandmarks,geosolutions:brivers,geosolutions:Mainrd&styles=&bbox=3056181.93510,1237476.92868,3080671.07513,1260141.38768&width=512&height=475&srs=EPSG:2876&format=image/png&palette=boulder

The PNG output palette by example
Parameters: FORMAT=image/png&palette=boulder | Size: 54.4 KB | Map generation time: 163 ms
Generating the custom palette¶
To generate a custom palette you can use, for example, IrfanView on Windows. The steps are simple:
- open the 24 bit PNG version of the image
- use Image/Decrease Color Depth and set 256 colors
- use Image/Palette/Export to save the palette
An example with raster data¶
To give you an example of when paletted images may not fit the bill, let’s consider the geosolutions:13tde815295_200803_0x6000m_cl coverage from the sample data, and repeat the same operations as before.
The standard PNG full color output:
http://localhost:8083/geoserver/geosolutions/wms?LAYERS=geosolutions%3A13tde815295_200803_0x6000m_cl&STYLES=&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A26913&BBOX=482574.82910157,4429949.7070313,482949.82910157,4430324.7070313&WIDTH=512&HEIGHT=512&FORMAT=image%2Fpng

The standard PNG output
Parameters: FORMAT=image/png | Size: 528.9 KB | Map generation time: 90 ms
JPEG output:
http://localhost:8083/geoserver/geosolutions/wms?LAYERS=geosolutions%3A13tde815295_200803_0x6000m_cl&STYLES=&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A26913&BBOX=482574.82910157,4429949.7070313,482949.82910157,4430324.7070313&WIDTH=512&HEIGHT=512&FORMAT=image%2Fjpeg

JPEG output
Parameters: FORMAT=image/jpeg | Size: 39.5 KB | Map generation time: 35 ms
PNG8 output (the output using a “palette by example” would be the same):
http://localhost:8083/geoserver/geosolutions/wms?LAYERS=geosolutions%3A13tde815295_200803_0x6000m_cl&STYLES=&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A26913&BBOX=482574.82910157,4429949.7070313,482949.82910157,4430324.7070313&WIDTH=512&HEIGHT=512&FORMAT=image%2Fpng8

PNG8 output
Parameters: FORMAT=image/png8 | Size: 141.8 KB | Map generation time: 201 ms
PNG output + safe palette:
http://localhost:8083/geoserver/geosolutions/wms?LAYERS=geosolutions%3A13tde815295_200803_0x6000m_cl&STYLES=&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG%3A26913&BBOX=482574.82910157,4429949.7070313,482949.82910157,4430324.7070313&WIDTH=512&HEIGHT=512&FORMAT=image%2Fpng&palette=safe

PNG + safe palette output
Parameters: FORMAT=image/png&palette=safe | Size: 96.8 KB | Map generation time: 235 ms
Note
As the sampler shows, the JPEG output has the same quality as the full color image, is generated faster, and uses only a fraction of its size. At the other extreme, the version using the internet safe palette is fast and smaller than the full PNG, but the output is totally ruined. Everything considered, JPEG is the clear winner, sporting good quality, fast image generation and smaller size. PNG is the suggested raster imagery format only when the output needs to be used as an overlay and thus requires transparent areas, or when the raster has large areas with uniform colors, which may happen for example in land use rasters.
Decorating a Map¶
WMS Decorations provide a framework for visually annotating images from WMS with absolute, rather than spatial, positioning. Examples of decorations include scale lines, legends, and images.
Go to $GEOSERVER_DATA_DIR and create a new directory named layouts and create a new file named boulder_ly.xml inside it.
Inside the boulder_ly.xml file enter the following XML (replace ${GEOSERVER_DATA_DIR} with your actual path, e.g., file://C:/training/geoserver_data):

<layout>
  <decoration type="image" affinity="top,left" offset="45,8" size="174,60">
    <option name="url" value="${GEOSERVER_DATA_DIR}/geosolutions-logo-tx.png" />
  </decoration>
  <decoration type="text" affinity="bottom,right" offset="3,3">
    <option name="message" value="Boulder City" />
    <option name="font-size" value="14" />
    <option name="font-color" value="#FFFFFF" />
    <option name="halo-radius" value="1" />
    <option name="halo-color" value="#000000" />
  </decoration>
  <decoration type="scaleline" affinity="bottom,left" offset="3,3" />
  <decoration type="legend" affinity="top,right" offset="6,6" size="auto" />
</layout>
Save and close the file.
Go to the Layer Preview to preview the new map decoration on the geosolutions:Mainrd layer. Once the layout boulder_ly.xml is defined, request it by adding format_options=layout:boulder_ly to the request parameters.
Map decoration
The request:
http://localhost:8083/geoserver/geosolutions/wms?service=WMS&version=1.1.0&request=GetMap&layers=geosolutions:Mainrd&styles=&bbox=3048474.661,1226045.092,3095249.0,1279080.5&width=451&height=512&srs=EPSG:2876&format=application/openlayers&format_options=layout:boulder_ly
Note
Zoom in until the layer and its legend appear, since this layer has scale_denominator based rules. You can also apply this format_options layout to any layer, but be careful with overlays, since you will have all the legends printed out on the top-right side of the map.
Accessing Map information¶
This workshop section describes how to use the GeoServer template system to create custom HTML GetFeatureInfo responses. GetFeatureInfo is a WMS standard call that allows one to retrieve information about features and coverages displayed in a map.
The map can be composed of various layers, and GetFeatureInfo can be instructed to return multiple feature descriptions, which may be of different types. GetFeatureInfo can generate output in various formats: GML2, plain text and HTML.
Templating is concerned with the HTML one.
Go to the Layer preview to show geosolutions:bplandmarks layer.
Click for example on the Rocky Mountain Natl Park region in the OpenLayers map to show the FeatureInfo.
Default GetFeatureInfo request
In order to configure a custom template for the GetFeatureInfo results, create three .ftl files in the $GEOSERVER_DATA_DIR/workspaces/geosolutions directory named:
- header.ftl
- content.ftl
- footer.ftl
Note
The templates are managed using Freemarker, a simple yet powerful template engine that GeoServer uses wherever user customization of textual outputs is allowed. In particular, at the time of writing it’s used to allow customization of the GetFeatureInfo, GeoRSS and KML outputs.
Note
Splitting the template in three files allows the administrator to keep a consistent styling for the GetFeatureInfo result, but use different templates for different workspaces or different layers: this is done by providing a master header.ftl and footer.ftl file, but specifying a different content.ftl for each layer.
In the header.ftl file enter the following HTML:
<#--
Header section of the GetFeatureInfo HTML output. Should have the <head>
section, and a starter of the <body>. It is advised that eventual CSS uses
a special class for featureInfo, since the generated HTML may blend with
another page changing its aspect when using generic classes like td, tr,
and so on.
-->
<html>
  <head>
    <title>Geoserver GetFeatureInfo output</title>
  </head>
  <style type="text/css">
    table.featureInfo, table.featureInfo td, table.featureInfo th {
      border: 1px solid #ddd;
      border-collapse: collapse;
      margin: 0;
      padding: 0;
      font-size: 90%;
      padding: .2em .1em;
    }
    table.featureInfo th {
      padding: .2em .2em;
      text-transform: uppercase;
      font-weight: bold;
      background: #eee;
    }
    table.featureInfo td {
      background: #fff;
    }
    table.featureInfo tr.odd td {
      background: #eee;
    }
    table.featureInfo caption {
      text-align: left;
      font-size: 100%;
      font-weight: bold;
      text-transform: uppercase;
      padding: .2em .2em;
    }
  </style>
  <body>
In the content.ftl file enter the following HTML:
<ul>
<#list features as feature>
  <li><b>Type: ${type.name}</b> (id: <em>${feature.fid}</em>):
    <ul>
    <#list feature.attributes as attribute>
      <#if !attribute.isGeometry>
        <li>${attribute.name}: ${attribute.value}</li>
      </#if>
    </#list>
    </ul>
  </li>
</#list>
</ul>
In the footer.ftl file enter the following HTML:
<#--
Footer section of the GetFeatureInfo HTML output.
Should close the body and the html tag.
-->
  </body>
</html>
Go to the Map Preview to show geosolutions:bplandmarks layer.
Click on the Rocky Mountain Natl Park region in the OpenLayers map to show the new FeatureInfo representation.
Custom GetFeatureInfo template
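To exercise the templates outside the Layer Preview, a GetFeatureInfo URL can be assembled by hand. The sketch below, using only Python’s standard library, builds such a request with info_format=text/html (the format rendered by the .ftl templates). The bbox and pixel coordinates are illustrative assumptions; the request is only constructed here, so the printed URL can be pasted into a browser.

```python
# Build a WMS GetFeatureInfo URL for the templated HTML output.
from urllib.parse import urlencode

def featureinfo_url(layer, bbox, width, height, x, y):
    """Return a GetFeatureInfo URL querying the pixel (x, y) of the map."""
    params = {
        "service": "WMS", "version": "1.1.1", "request": "GetFeatureInfo",
        "layers": layer, "query_layers": layer, "styles": "",
        "bbox": bbox, "width": width, "height": height,
        "srs": "EPSG:4326",
        "info_format": "text/html",   # rendered through the .ftl templates
        "x": x, "y": y,               # pixel to query, relative to the image
    }
    return "http://localhost:8083/geoserver/geosolutions/wms?" + urlencode(params)

url = featureinfo_url("geosolutions:bplandmarks",
                      "-105.7,39.9,-105.2,40.4", 512, 512, 128, 200)
print(url)
```

Changing info_format to text/plain or application/vnd.ogc.gml returns the untemplated formats mentioned earlier.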
Cross layer filtering with GeoServer¶
Normal GeoServer operation allows a filter to be applied on each layer in isolation, based on its attribute and external information (geometry, values) provided by the user. Cross layer filtering is instead the ability to select features from one layer that bear some relationship with features coming from another layer. Common questions that cross layer filters can help answering are:
- find all the ice cream stores located in a public park (point vs polygon)
- find all bus stops within 100m from the National Bank subsidiaries (point vs point, with distance reference)
- find all coastal roads (line vs polygon, assuming we have a set of polygons representing the water areas)
In order to solve these questions with a vanilla GeoServer a client would have to first use WFS to gather all the geometries satisfying the base conditions (e.g., find the National Bank Subsidiaries), load and unite them, and then issue a second request to the server in order to get the data from the other layer (e.g., the bus stops within 100m from the previously loaded points).

Round trips without cross layer filtering
The querylayer module¶
The querylayer extension, already installed in the workshop GeoServer instance, provides three new filter functions that can be used to avoid the client/server extra round trips, and have the server handle the secondary geometries collection instead.
Name | Arguments | Description
querySingle | layer: String, attribute: String, filter: String | Queries the specified layer applying the specified (E)CQL filter and returns the value of attribute from the first feature in the result set. The layer name should be qualified (e.g. topp:states); the filter can be INCLUDE if no filtering is desired.
queryCollection | layer: String, attribute: String, filter: String | Queries the specified layer applying the specified (E)CQL filter and returns the list of the values from attribute out of every single feature in the result set. The layer name should be qualified (e.g. topp:states); the filter can be INCLUDE if no filtering is desired. Will throw an exception if too many results are being collected (see the memory limits section for details).
collectGeometries | geometries: a list of Geometry objects | Turns the list of geometries into a single Geometry object, suitable for being used as the reference geometry in spatial filters. Will throw an exception if too many coordinates are being collected (the results of queryCollection cannot be used as is).
These filter functions can be used directly in CQL filters, OGC filters and SLD, meaning they are available both from WMS and WFS.
Finding all polygonal landmarks crossing a trail¶
The following map, obtained using the WMS reflector to keep the URL short, shows all polygonal landmarks and trails in Boulder (trails become visible when zooming in due to scale dependencies):
http://localhost:8083/geoserver/geosolutions/wms/reflect?layers=geosolutions:bplandmarks,Trails&format=application/openlayers&width=512&height=512&BBOX=-105.31,39.97,-105.26,40.2

Polygonal landmarks and trails in Boulder
Now, let’s assume we want to find all polygonal landmarks crossing any trail using the above filter functions.
The first step is to locate all the trails and extract their geometry attribute (the_geom):

queryCollection('Trails', 'the_geom', 'INCLUDE')
The above builds a list of geometries that we want to turn into a single MULTILINESTRING, in order to use it as a reference for an INTERSECTS filter. So we’ll call collectGeometries:

collectGeometries(queryCollection('Trails', 'the_geom', 'INCLUDE'))
Now that we have all the trails in a single geometry object, we can use it to build an intersection filter with the polygonal landmarks:
INTERSECTS(the_geom, collectGeometries(queryCollection('Trails', 'the_geom', 'INCLUDE')))
Since the map contains two layers and we only want to filter on the first, the final CQL filter used in the GetMap request will be:
INTERSECTS(the_geom, collectGeometries(queryCollection('Trails', 'the_geom', 'INCLUDE')));INCLUDE
The result is that only two polygonal landmarks, Boulder Mountain Park and the smaller Buckingham Park, cross any trail:

Polygonal landmarks intersecting trails in Boulder
Finding all buildings located inside a park¶
In this case we’ll start with this map:
http://localhost:8083/geoserver/geosolutions/wms/reflect?layers=geosolutions:bplandmarks,bbuildings&format=application/openlayers&width=512&height=512&&BBOX=-105.29,40.01,-105.28,40.02

Buildings and parks in Boulder
The filter construction is similar to the previous case, but this time we need to collect geometries only from parks, which have an MTFCC attribute equal to K2180:
INCLUDE;INTERSECTS(the_geom, collectGeometries(queryCollection('bplandmarks', 'the_geom', 'MTFCC = ''K2180''')))

Buildings inside parks in Boulder
Finding all buildings close enough to the Boulder County Courthouse¶
In this case we want to find all the buildings close to the Boulder County Courthouse. The reference map this time is:
http://localhost:8083/geoserver/geosolutions/wms/reflect?layers=geosolutions:bptlandmarks,bbuildings&format=application/openlayers&width=512&height=512&&BBOX=-105.28061758059,40.016146865234,-105.27475307863,40.021151240234

Boulder County Courthouse surrounded by buildings
This will extract a single geometry that we’ll use as a reference, so this time we are going to use the querySingle function instead, and use the DWITHIN filter to locate all buildings within 400 feet of the courthouse:
INCLUDE;DWITHIN(the_geom, querySingle('bptlandmarks', 'the_geom', 'FULLNAME = ''Boulder County Courthouse'''), 400, feet)
and the resulting map is going to be:

Buildings close to the Boulder County Courthouse
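When sending these multi-layer filters in a raw GetMap request, each layer’s filter is joined with a semicolon and the whole string must be URL-encoded. A minimal Python sketch of that encoding, assuming the same layer order as the map above:

```python
# Encode a two-layer cross-layer CQL filter for a GetMap request.
# Layer 1 (bptlandmarks) is unfiltered; layer 2 (bbuildings) gets the
# DWITHIN cross-layer filter. The filters are joined with ';'.
from urllib.parse import quote

filters = [
    "INCLUDE",  # no filtering on the first layer
    ("DWITHIN(the_geom, querySingle('bptlandmarks', 'the_geom', "
     "'FULLNAME = ''Boulder County Courthouse'''), 400, feet)"),
]
cql = ";".join(filters)
query = "CQL_FILTER=" + quote(cql)  # percent-encode the whole expression
print(query)
```

The resulting string is appended to the GetMap URL as an extra query parameter.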
Advanced Raster Data Management¶
Introduction To Processing With GDAL Utilities¶
In the Adding a GeoTiff section, a GeoTIFF file was added to GeoServer as is. However, it’s common practice to do a preliminary analysis on the available data and, if needed, optimize it, since configuring big datasets without proper pre-processing may result in low performance when accessing them. In this section, instructions on how to do data optimization are provided by introducing some of the FWTools utilities.
Note
On a Windows machine you can set up a shell with all the GDAL utilities by opening a terminal and running the file setenv.bat under the %TRAINING_ROOT% folder. This operation must be repeated whenever a new terminal window is opened. Alternatively, run the file gdal.bat under the %TRAINING_ROOT% folder directly.
gdalinfo¶
This utility reports information from the GDAL library, for instance specific driver capabilities and the properties of input datasets/files.
Since GeoTIFF is a widely adopted geospatial format, it’s useful to get information about the GDAL GeoTIFF driver’s capabilities using the command:
gdalinfo --format GTIFF
This is only a trimmed down version of a typical output:
Format Details:
Short Name: GTiff
Long Name: GeoTIFF
Extension: tif
Mime Type: image/tiff
Help Topic: frmt_gtiff.html
Supports: Create() - Create writeable dataset.
Supports: CreateCopy() - Create dataset by copying another.
Supports: Virtual IO - eg. /vsimem/
Creation Datatypes: Byte UInt16 Int16 UInt32 Int32 Float32 Float64 CInt16 CInt32 CFloat32 CFloat64
<CreationOptionList>
<Option name="COMPRESS" type="string-select">
<Value>NONE</Value>
<Value>LZW</Value>
<Value>PACKBITS</Value>
<Value>JPEG</Value>
<Value>CCITTRLE</Value>
<Value>CCITTFAX3</Value>
<Value>CCITTFAX4</Value>
<Value>DEFLATE</Value>
</Option>
<Option name="PREDICTOR" type="int" description="Predictor Type" />
<Option name="JPEG_QUALITY" type="int" description="JPEG quality 1-100" default="75"/>
<Option name="ZLEVEL" type="int" description="DEFLATE compression level 1-9" default="6" />
<Option name="LZMA_PRESET" type="int" description="LZMA compression level 0(fast)-9(slow)" default="6" />
<Option name="NBITS" type="int" description="BITS for sub-byte files (1-7), sub-uint16 (9-15), sub-uint32 (17-31)" />
<Option name="INTERLEAVE" type="string-select" default="PIXEL">
<Value>BAND</Value>
<Value>PIXEL</Value>
</Option>
<Option name="TILED" type="boolean" description="Switch to tiled format"/>
<Option name="TFW" type="boolean" description="Write out world file"/>
<Option name="RPB" type="boolean" description="Write out .RPB (RPC) file" />
<Option name="BLOCKXSIZE" type="int" description="Tile Width"/>
<Option name="BLOCKYSIZE" type="int" description="Tile/Strip Height"/>
<Option name="PHOTOMETRIC" type="string-select">
<Value>MINISBLACK</Value>
<Value>MINISWHITE</Value>
<Value>PALETTE</Value>
<Value>RGB</Value>
<Value>CMYK</Value>
<Value>YCBCR</Value>
<Value>CIELAB</Value>
<Value>ICCLAB</Value>
<Value>ITULAB</Value>
</Option>
<Option name="SPARSE_OK" type="boolean" description="Can newly created files have missing blocks?" default="FALSE" />
<Option name="ALPHA" type="boolean" description="Mark first extrasample as being alpha" />
<Option name="PROFILE" type="string-select" default="GDALGeoTIFF">
<Value>GDALGeoTIFF</Value>
<Value>GeoTIFF</Value>
<Value>BASELINE</Value>
</Option>
<Option name="PIXELTYPE" type="string-select">
<Value>DEFAULT</Value>
<Value>SIGNEDBYTE</Value>
</Option>
<Option name="BIGTIFF" type="string-select" description="Force creation of BigTIFF file">
<Value>YES</Value>
<Value>NO</Value>
<Value>IF_NEEDED</Value>
<Value>IF_SAFER</Value>
</Option>
<Option name="ENDIANNESS" type="string-select" default="NATIVE" description="Force endianness of created file. For DEBUG purpose mostly">
<Value>NATIVE</Value>
<Value>INVERTED</Value>
<Value>LITTLE</Value>
<Value>BIG</Value>
</Option>
<Option name="COPY_SRC_OVERVIEWS" type="boolean" default="NO" description="Force copy of overviews of source dataset (CreateCopy())" />
</CreationOptionList>
From the above list of creation options it’s possible to determine the main GeoTIFF driver’s writing capabilities:
- COMPRESS: customize the compression to be used when writing output data
- JPEG_QUALITY: specify a quality factor to be used by the JPEG compression
- TILED: when set to YES, produces tiled output data
- BLOCKXSIZE, BLOCKYSIZE: specify the tile width and tile height
- PHOTOMETRIC: specify the photometric interpretation of the data
- PROFILE: specify the GeoTIFF profile to be used (some profiles only support a minimal set of TIFF tags while others provide a wider range of tags)
- BIGTIFF: specify when to write data as BigTIFF (a TIFF format which allows breaking the 4GB offset boundary)
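These creation options are passed to gdal_translate via repeated -co flags. The hypothetical helper below only assembles such a command line as a list of arguments, without invoking GDAL, so the example stays runnable even where GDAL is not installed.

```python
# Assemble a gdal_translate command line from keyword creation options.
def translate_cmd(src, dst, **creation_options):
    """Return the gdal_translate argument list for the given -co options."""
    cmd = ["gdal_translate"]
    for name, value in creation_options.items():
        # each creation option becomes: -co "NAME=VALUE"
        cmd += ["-co", f"{name.upper()}={value}"]
    cmd += [src, dst]
    return cmd

cmd = translate_cmd("input.tif", "tiled.tif",
                    tiled="YES", blockxsize=512, blockysize=512,
                    compress="JPEG", jpeg_quality=85)
print(" ".join(cmd))
```

The list form is convenient for subprocess.run, which avoids shell quoting issues around the "NAME=VALUE" pairs.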
The following instructions allow you to get information about the sample dataset previously configured in GeoServer.
Run:
Linux:

cd ${TRAINING_ROOT}/data/user_data/aerial
gdalinfo 13tde815295_200803_0x6000m_cl.tif

Windows:

setenv.bat
cd %TRAINING_ROOT%\data\user_data\aerial\
gdalinfo 13tde815295_200803_0x6000m_cl.tif
Part of the gdalinfo output on a sample dataset
Check the Block info as well as the Overviews info if present.
- Block: it represents the internal tiling. Notice that the sample dataset has tiles made of 16 rows, each with a width equal to the full image width.
- Overviews: it provides information about the underlying overviews. Notice that the sample dataset doesn’t have overviews, since the Overviews property is missing from the gdalinfo output.
gdal_translate¶
This utility converts a dataset to a different format, with a wide set of parameters available to customize the conversion.
Running the command:

gdal_translate

prints the list of supported parameters as well as the supported output formats:
Usage: gdal_translate [--help-general]
[-ot {Byte/Int16/UInt16/UInt32/Int32/Float32/Float64/
CInt16/CInt32/CFloat32/CFloat64}] [-strict]
[-of format] [-b band] [-mask band] [-expand {gray|rgb|rgba}]
[-outsize xsize[%] ysize[%]]
[-unscale] [-scale [src_min src_max [dst_min dst_max]]]
[-srcwin xoff yoff xsize ysize] [-projwin ulx uly lrx lry]
[-a_srs srs_def] [-a_ullr ulx uly lrx lry] [-a_nodata value]
[-gcp pixel line easting northing [elevation]]*
[-mo "META-TAG=VALUE"]* [-q] [-sds]
[-co "NAME=VALUE"]* [-stats]
src_dataset dst_dataset
- Where the meaning of the main parameters is summarized below:
- -ot: specifies the output data type (make sure the specified type is listed among the Creation Datatypes of the writing driver)
- -b: selects an input band to be written to the output file (use multiple -b options to specify more bands)
- -mask: selects an input band to be written as the output dataset mask band.
- -expand: exposes a dataset with 1 band and a color table as a dataset with 3 (rgb) or 4 (rgba) bands. The (gray) value expands a dataset whose color table contains only gray levels to a gray-indexed dataset.
- -outsize: sets the size of the output file in pixels and lines, unless the % sign is attached, in which case it is a fraction of the input image size.
- -unscale: applies the scale/offset metadata of the bands to convert scaled values to unscaled ones.
- -scale: rescales the input pixel values from the range src_min to src_max to the range dst_min to dst_max. (If omitted, the output range is 0 to 255 and the input range is automatically computed from the source data.)
- -srcwin: selects a subwindow from the source image in terms of xoffset, yoffset, width and height
- -projwin: selects a subwindow from the source image by specifying the corners in georeferenced coordinates.
- -a_srs: overrides the projection of the output file. The srs_def may be any of the usual GDAL/OGR forms: complete WKT, PROJ.4, EPSG:n or a file containing the WKT.
- -a_ullr: assigns/overrides the georeferenced bounds of the output file.
- -a_nodata: assigns the specified nodata value to the output bands.
- -co: sets a creation option in the form "NAME=VALUE" for the output format driver. (Multiple -co options may be listed.)
- -stats: computes statistics (min, max, mean, stdDev) for each band
- src_dataset: the source dataset name. It can be a file name, a URL of a data source or a subdataset name for multi-dataset files.
- dst_dataset: the destination file name.
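The relation between a -projwin window (georeferenced corners) and the equivalent -srcwin window (pixel offsets and sizes) can be sketched in a few lines. This is an illustrative helper, not part of GDAL, and the origin and pixel-size values used below are hypothetical values of the kind gdalinfo reports:

```python
import math

def projwin_to_srcwin(ulx, uly, lrx, lry, origin_x, origin_y, px, py):
    """Map a -projwin window (georeferenced corners) to the equivalent
    -srcwin window (pixel offsets and sizes) for a north-up image.
    px is positive; py is the negative y pixel size reported by gdalinfo."""
    xoff = int((ulx - origin_x) / px)
    yoff = int((uly - origin_y) / py)      # py < 0, so yoff grows downward
    xsize = math.ceil((lrx - ulx) / px)
    ysize = math.ceil((lry - uly) / py)
    return xoff, yoff, xsize, ysize

# Hypothetical values: a 100x100-pixel window starting at the origin
# of an image with 1-metre pixels.
print(projwin_to_srcwin(500000.0, 4000000.0, 500100.0, 3999900.0,
                        500000.0, 4000000.0, 1.0, -1.0))  # (0, 0, 100, 100)
```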
The following steps provide instructions to tile the sample dataset previously configured in GeoServer, by using the GeoTiff driver.
- Create a directory to store the converted data:
Linux:
cd ${TRAINING_ROOT}/data/user_data
mkdir retiled
Windows:
cd %TRAINING_ROOT%\data\user_data
mkdir retiled
- Convert the input sample data to an output file having tiling set to 512x512. Run:
Linux:
gdal_translate -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" aerial/13tde815295_200803_0x6000m_cl.tif retiled/13tde815295_200803_0x6000m_cl.tif
Windows:
gdal_translate -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" aerial\13tde815295_200803_0x6000m_cl.tif retiled\13tde815295_200803_0x6000m_cl.tif
- Optionally, check that the output dataset has been successfully tiled, by running the command:
Linux:
gdalinfo retiled/13tde815295_200803_0x6000m_cl.tif
Windows:
gdalinfo retiled\13tde815295_200803_0x6000m_cl.tif
Part of the gdalinfo output on the tiled dataset. Notice the Block value now is 512x512
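As a side note, the number of 512x512 blocks produced for one of the 2500x2500 sample images can be estimated with a small sketch (block_count is a hypothetical helper; GDAL pads the partial blocks at the right and bottom edges to the full block size):

```python
import math

def block_count(width, height, bx=512, by=512):
    # Partial blocks at the right and bottom edges still occupy a full
    # block on disk, hence the ceiling division.
    return math.ceil(width / bx) * math.ceil(height / by)

print(block_count(2500, 2500))  # 5 x 5 = 25 blocks
```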
gdaladdo¶
This utility adds overviews to a dataset. The following steps provide instructions to add overviews to the tiled sample dataset.
Running the command:
gdaladdo
prints the list of supported parameters:
Usage: gdaladdo [-r {nearest,average,gauss,average_mp,average_magphase,mode}]
[-ro] [--help-general] filename levels
- Where the meaning of the main parameters is summarized below:
- -r: specifies the resampling algorithm (nearest is the default)
- -ro: opens the dataset in read-only mode, so that external overviews are generated (for GeoTIFF especially)
- filename: the file to build overviews for.
- levels: the list of overview levels to build.
- Run:
Linux:
cd ${TRAINING_ROOT}/data/user_data/retiled
gdaladdo -r average 13tde815295_200803_0x6000m_cl.tif 2 4 8 16 32
Windows:
cd %TRAINING_ROOT%\data\user_data\retiled
gdaladdo -r average 13tde815295_200803_0x6000m_cl.tif 2 4 8 16 32
to add 5 levels of overviews with subsampling factors 2, 4, 8, 16 and 32 relative to the original image resolution.
Optionally, check that the overviews have been added to the dataset, by running the command:
gdalinfo 13tde815295_200803_0x6000m_cl.tif
Part of the gdalinfo output on the tiled dataset with overviews. Notice the Overviews properties
Process in bulk¶
Instead of manually repeating these two steps (retile + add overviews) for each file, the process can be automated with a few commands.
- Run:
Linux:
cd ${TRAINING_ROOT}/data/user_data
mkdir optimized
cd aerial
for i in `find *.tif`; do
    gdal_translate -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" $i ../optimized/$i;
    gdaladdo -r average ../optimized/$i 2 4 8 16 32;
done
Windows:
cd %TRAINING_ROOT%\data\user_data\
mkdir optimized
cd aerial
notepad optimize.bat
A text editor will open. Copy the following content:
for %%F in (*.tif) do (
    echo Processing file %%F
    REM translate
    echo Performing gdal_translate on file %%F to file %%~nF.tiff
    gdal_translate -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "COMPRESS=DEFLATE" %%F ..\optimized\%%~nF.tiff
    REM add overviews
    echo Adding overviews on file %%~nF.tiff
    gdaladdo -r average --config COMPRESS_OVERVIEW DEFLATE ..\optimized\%%~nF.tiff 2 4 8 16 32
)
Then save the file and run the created .bat file:
optimize.bat
You should see output like this for each processed file:
... Input file size is 2500, 2500 0...10...20...30...40...50...60...70...80...90...100 - done. 0...10...20...30...40...50...60...70...80...90...100 - done. Input file size is 2500, 2500 0...10...20...30...40...50...60...70...80...90...100 - done. 0...10...20...30...40...50...60...70...80...90...100 - done. Input file size is 2500, 2500 0...10...20...30...40...50...60...70...80...90...100 - done. 0...10...20...30...40...50...60...70...80...90...100 - done. ...
Warning
This process can take some seconds.
At this point optimized datasets have been prepared and they are ready to be served by GeoServer as an ImageMosaic.
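The two platform-specific loops above can also be sketched as a single cross-platform Python driver. build_commands and process are hypothetical helpers, and actually running process() assumes gdal_translate and gdaladdo are on the PATH:

```python
import subprocess
from pathlib import Path

def build_commands(src, dst_dir):
    """Return the gdal_translate and gdaladdo invocations for one granule."""
    dst = dst_dir / src.name
    translate = ["gdal_translate",
                 "-co", "TILED=YES",
                 "-co", "BLOCKXSIZE=512", "-co", "BLOCKYSIZE=512",
                 str(src), str(dst)]
    addo = ["gdaladdo", "-r", "average",
            str(dst), "2", "4", "8", "16", "32"]
    return translate, addo

def process(src_dir="aerial", dst_dir="../optimized"):
    # Retile and then add overviews, one file at a time.
    out = Path(dst_dir)
    out.mkdir(exist_ok=True)
    for tif in sorted(Path(src_dir).glob("*.tif")):
        for cmd in build_commands(tif, out):
            subprocess.run(cmd, check=True)
```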
gdalwarp¶
This utility warps and reprojects a dataset. The following steps provide instructions to reproject the aerial dataset (whose coordinate reference system is EPSG:26913) to WGS84 (EPSG:4326).
Running the command:
gdalwarp
prints the list of supported parameters:
Usage: gdalwarp [--help-general] [--formats]
[-s_srs srs_def] [-t_srs srs_def] [-to "NAME=VALUE"]
[-order n | -tps | -rpc | -geoloc] [-et err_threshold]
[-refine_gcps tolerance [minimum_gcps]]
[-te xmin ymin xmax ymax] [-tr xres yres] [-tap] [-ts width height]
[-wo "NAME=VALUE"] [-ot Byte/Int16/...] [-wt Byte/Int16]
[-srcnodata "value [value...]"] [-dstnodata "value [value...]"] [-dstalpha]
[-r resampling_method] [-wm memory_in_mb] [-multi] [-q]
[-cutline datasource] [-cl layer] [-cwhere expression]
[-csql statement] [-cblend dist_in_pixels] [-crop_to_cutline]
[-of format] [-co "NAME=VALUE"]* [-overwrite]
srcfile* dstfile
- Where the meaning of the main parameters is summarized below:
- -s_srs: the source coordinate reference system
- -t_srs: the target coordinate reference system
- -te: the georeferenced extents (expressed in the target CRS) of the output
- -tr: the output resolution (expressed in target georeferenced units)
- -ts: the output size in pixels and lines.
- -r: the resampling method (one of near, bilinear, cubic, cubicspline and lanczos)
- -srcnodata: band values to be excluded from interpolation.
- -dstnodata: nodata values for the output file.
- -wm: the amount of memory (expressed in megabytes) used by the warping API for caching.
- Run:
Linux:
cd ${TRAINING_ROOT}/data/user_data/retiled
gdalwarp -t_srs "EPSG:4326" -co "TILED=YES" 13tde815295_200803_0x6000m_cl.tif 13tde815295_200803_0x6000m_cl_warped.tif
Windows:
cd %TRAINING_ROOT%\data\user_data\retiled
gdalwarp -t_srs "EPSG:4326" -co "TILED=YES" 13tde815295_200803_0x6000m_cl.tif 13tde815295_200803_0x6000m_cl_warped.tif
to reproject the specified aerial dataset to the WGS84 coordinate reference system.
Optionally, check that the reprojection has been successful, by running the command:
gdalinfo 13tde815295_200803_0x6000m_cl_warped.tif
Part of the gdalinfo output on the warped dataset. Notice the updated Coordinate System property
In the next section, instructions to configure an ImageMosaic will be provided.
Advanced Mosaics and Pyramids Configuration¶
In this section you will learn how to manage Image Mosaics and Image Pyramids in GeoServer.
Configuring an Image Mosaic¶
As introduced in a previous section, an Image Mosaic is composed of a set of datasets which are exposed as a single coverage. The ImageMosaic format allows you to automatically build and set up a mosaic from a set of georeferenced datasets. This section provides detailed instructions to configure an Image Mosaic.
Note
Before you start, ensure that the Maps - Raster section has been completed.
We will configure an ImageMosaic using the optimized dataset. As explained in the Maps - Raster section, follow steps 1 to 4, then at step 5 fill in the fields as explained below.
Specify a proper name (for instance, boulder_bg_optimized) in the Data Source Name field of the interface.
Specify file:<TRAINING_ROOT>/data/user_data/optimized as the URL of the sample data in the Connection Parameters - URL field.
Click Save.
Publish the layer by clicking on the publish link.
Set boulder_bg_optimized as the name and title of the layer.
Check that the Coordinate Reference Systems and the Bounding Boxes fields are properly set.
Customize the ImageMosaic properties if needed. For the sample mosaic, set the OutputTransparentColor to the value 000000 (which is black). Click Save when done.
- AllowMultithreading: if true, enables multithreaded tile loading, allowing parallelized loading of the granules that compose the mosaic.
- BackgroundValues: sets the value of the mosaic background. Depending on the nature of the mosaic, it is wise to set a value for the nodata areas (usually -9999). This value is repeated on all the mosaic bands.
- Filter: filters granules based on attributes from the input coverage.
- InputTransparentColor: sets the transparent color for the granules prior to mosaicking them, in order to control the superimposition process between them. When GeoServer composes the granules to satisfy a user request, some of them can overlap others; setting this parameter to the appropriate color therefore avoids overlapping nodata areas between granules.
- MaxAllowedTiles: sets the maximum number of tiles that can be loaded simultaneously for a request. For a large mosaic this parameter should be set appropriately so as not to saturate the server with too many granules loaded at the same time.
- MergeBehavior: the merging behaviour for the various granules of the mosaic that GeoServer will produce. This parameter controls whether we want to merge the granules into a single mosaic or stack all the bands into the final mosaic.
- OutputTransparentColor: sets the transparent color for the created mosaic.
- SORTING: specifies the time order of the obtained granule set. Valid values are DESC (descending) or ASC (ascending). Note that it only works when a DBMS is used as the index.
- SUGGESTED_TILE_SIZE: controls the tile size of the input granules as well as the tile size of the output mosaic. It consists of two positive integers separated by a comma, like 512,512.
- USE_JAI_IMAGEREAD: if true, GeoServer will use the JAI ImageRead operation and its deferred loading mechanism to load granules; if false, GeoServer will perform direct ImageIO read calls, which will result in immediate loading.
At this point the ImageMosaic has been published with GeoServer. The next step is checking how the performance of accessing the datasets has improved.
Click the Layer Preview link in the left GeoServer menu.
Look for the geosolutions:boulder_bg layer (the dataset without optimization) and click the OpenLayers link beside it.
Play with the map preview by zooming and panning. When zooming, the response time isn’t immediate due to the access to the underlying big datasets which haven’t been optimized.
Return to the Layer Preview page.
Look for the geosolutions:boulder_bg_optimized layer (the optimized dataset with tiling and overviews set) and click the OpenLayers link beside it.
Play with the map preview by zooming and panning:
- Check how the performance has improved by leveraging both overviews and tiling.
- Note the better image quality of the lowest resolution views, thanks to the average interpolation algorithm used when creating the overviews.
Configuring an Image Pyramid¶
GeoServer can efficiently deal with a large TIFF with overviews, as long as the TIFF is below the 2GB size limit. Once the image size goes beyond that limit, it's time to consider an image pyramid instead. An image pyramid builds multiple mosaics of images, each one at a different zoom level, such that each tile is stored in a separate file. This comes with a composition overhead to bring the tiles back into a single image, but it can speed up image handling as each overview is tiled, and thus a subset of it can be accessed efficiently (as opposed to a single GeoTIFF, where the base level can be tiled, but the overviews never are).
Note
In order to build the pyramid we’ll use the gdal_retile.py utility, part of the GDAL command line utilities and available for various operating systems.
- Navigate to the workshop directory and create the bmpyramid directory into the <TRAINING_ROOT>\data\user_data directory
- From the command line run
Linux:
cd $TRAINING_ROOT/data/user_data
mkdir bmpyramid
gdal_retile.py -v -r bilinear -levels 4 -ps 2048 2048 -co "TILED=YES" -co "COMPRESS=JPEG" -targetDir bmpyramid bmreduced.tiff
Windows:
cd %TRAINING_ROOT%\data\user_data\
mkdir bmpyramid
gdal_retile -v -r bilinear -levels 4 -ps 2048 2048 -co "TILED=YES" -co "COMPRESS=JPEG" -targetDir bmpyramid bmreduced.tiff
The gdal_retile.py user guide provides a detailed explanation of all the possible parameters; here is a description of the ones used in the command line above:
- -v: verbose output, allows the user to see each file creation scroll by, thus knowing progress is being made (a big pyramid construction can take hours)
- -r bilinear: use bilinear interpolation when building the lower resolution levels. This is key to get good image quality without asking GeoServer to perform expensive interpolations in memory
- -levels 4: the number of levels in the pyramid
- -ps 2048 2048: each tile in the pyramid will be a 2048x2048 GeoTIFF
- -co “TILED=YES”: each GeoTIFF tile in the pyramid will be inner tiled
- -co "COMPRESS=JPEG": each GeoTIFF tile in the pyramid will be JPEG compressed (this trades some image quality for smaller size and higher performance; try it without this parameter too)
- -targetDir bmpyramid: build the pyramid in the bmpyramid directory. The target directory must exist and be empty
- bmreduced.tiff: the source file
This will produce a number of TIFF files in bmpyramid along with the sub-directories 1, 2, 3, and 4.
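To sanity-check a -levels choice, one can count how many halvings are needed before the reduced image fits inside a single 2048x2048 tile. levels_to_single_tile is a hypothetical helper, and the 21600x10800 size below is an assumed example, not the actual size of bmreduced.tiff:

```python
import math

def levels_to_single_tile(width, height, tile=2048):
    # Each extra pyramid level halves both dimensions; count the
    # halvings needed until the reduced image fits inside one tile.
    levels = 0
    while width > tile or height > tile:
        width, height = math.ceil(width / 2), math.ceil(height / 2)
        levels += 1
    return levels

print(levels_to_single_tile(21600, 10800))  # 4
```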
Go to the Stores section and add a new Raster Data Source by clicking on ImagePyramid:
Adding an ImagePyramid Data Source
Warning
This assumes the GeoServer image pyramid plug-in is already installed. The pyramid is normally an extension.
If the ImagePyramid store is not available, install the GeoServer pyramid plugin from %TRAINING_ROOT%/data/plugins/ before doing the exercise: just decompress the zip file into %TRAINING_ROOT%/webapps/geoserver/WEB-INF/lib/ and restart GeoServer.
Specify a proper name (bm_pyramid) in the Data Source Name field of the interface and specify a proper URL with the pyramid data directory:
Windows:
file:%TRAINING_ROOT%/data/user_data/bmpyramid
Linux:
file:/home/geosolutions/Desktop/geoserver_training/data/user_data/bmpyramid
Configuring a image pyramid store
Click the Save button.
Note
When clicking save the store will look into the directory, recognize a gdal_retile generated structure and perform some background operations:
- move all the tiff files in the root to a newly created directory 0
- create an image mosaic in all sub-directories (shapefile index plus property file)
- create the root property file describing the whole pyramid structure
Publish the new pyramid created:
Choosing the coverage for publishing
Set the layer parameter USE_JAI_IMAGEREAD to false to get better scalability: as noted before, image loading with imageread uses the JAI deferred mode, so the data will really be loaded only when needed. This may leave many idle open ImageReaders when dealing with big pyramids (lots of granules over many levels), and it could cause performance issues.
Tuning the pyramid parameters
Click Submit button and go to the GeoServer Map Preview to see the pyramid:
Previewing the pyramid
Using the ImageMosaic plugin with footprint management¶
Introduction¶
This section describes how to associate a vector footprint to a raster dataset in GeoServer using the ImageMosaic plugin.
A vector footprint is a shape used as a mask for the mosaic. Masking can be useful for hiding pixels which are meaningless, or for enhancing only a few regions of the image with respect to others.
This chapter contains two sub-sections:
- The first sub-section, Configuration, describes the possible configurations needed to set up an ImageMosaic with footprint.
- The second sub-section, Examples, provides examples of configuration of an ImageMosaic with footprint.
Configuration¶
A vector footprint can be linked to an ImageMosaic in three different ways:
- By using a Sidecar File for each mosaic granule: a Shapefile with the same filename as the granule, containing its footprint;
- By using a single Shapefile called footprints.shp which contains all the footprints, one per granule; each feature contained in the shapefile represents the footprint of an ImageMosaic granule, associated to it via the location attribute;
- By using a file called footprints.properties; this option adds more flexibility to option number 2.
The last option allows to write the following configuration inside the footprints.properties file:
footprint_source=*location of the Shapefile*
footprint_filter=*filter on the Shapefile searching for the attribute associated to each granule*
So the user is free to choose the Shapefile name (not only footprints.shp) and the attribute to use for the footprint-granule association, also adding a custom filter on the content of that attribute.
The footprints.properties file can also be used to hold other kinds of configuration; see the Inset Support paragraph below.
For example, suppose a Shapefile called fakeShapeFile stores the various footprints in a table like this, where each Name attribute refers to a granule file:
And the associated granules are:
- ortho_1-1_1n_s_la087_2010_1.tif
- ortho_2-2_1n_s_la075_2010_1.tif
- ortho_1-1_1n_s_la103_2010_1.tif
- and so on …
The associated footprints.properties file must be like this:
footprint_source=fakeShapeFile.shp
footprint_filter=Name=strSubstring(granule.location, 0, strLength(granule.location) - 4)
The substring operation compares the footprint attribute names and the granule names without the .tif extension. Standard GeoServer Filter Functions can be used in this expression; a complete reference for them can be found here.
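The effect of that filter expression can be mirrored in plain Python; footprint_name is a hypothetical helper reproducing the substring logic:

```python
def footprint_name(granule_location):
    # Equivalent of strSubstring(granule.location, 0,
    # strLength(granule.location) - 4): strip the 4-character ".tif"
    # suffix so the result can be compared with the Name attribute.
    return granule_location[:-4]

print(footprint_name("ortho_1-1_1n_s_la087_2010_1.tif"))
# ortho_1-1_1n_s_la087_2010_1
```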
There are three possible behaviours for Footprint:
- None: the footprint is simply not used and the layer behaves like a standard ImageMosaic layer;
- Transparent: adds an alpha band of 0s on the image portions outside of the footprint, making them transparent; typically used for RGB data;
- Cut: sets the background value on the image portions outside of the footprint; typically used for grayscale data.
The behaviour must be set directly on the Layer configuration page.
Another feature of the footprint support is the possibility to compute an Inset on the vector footprint before applying it to the image. The inset shrinks the footprint, typically to remove compression artefacts or other unwanted effects at the borders. The inset can be activated by adding the following lines inside footprints.properties:
footprint_inset=*value in the shapefile u.o.m.*
footprint_inset_type=*full/border*
- Full: calculates the inset on each footprint side
- Border: does the same operation but skips the straight lines that overlap the image bounds; this is useful for images already cut in a regular grid.
Each modification of the footprints.properties file requires a GeoServer reload. This can be done by going to Server Status and clicking the Reload button on the bottom-right side of the page.
Examples¶
The two datasets used in the following can be found in:
- Linux:
$TRAINING_ROOT/data/user_data/footprint_data
- Windows
%TRAINING_ROOT%\data\user_data\footprint_data
The content of the footprint_data is:
- The first dataset, mosaic_single_tiff, contains a Shapefile called srtm_boulder.shp, which represents a mask to use on the Boulder (Colorado) layer inside the $TRAINING_ROOT/data/user_data/boulder folder, and can be used for testing the footprint configuration with a Sidecar File.
- The second dataset, mosaic_sample, is a mosaic which represents Italy and is used for testing the other two configurations.
Here are the steps to configure a new ImageMosaic layer with a footprint, loading an ImageMosaic with a sidecar file as the vector footprint.
As seen in a previous chapter, an ImageMosaic data store can be created from the store creation page.
Load the mosaic_single_tiff folder: from the TRAINING_ROOT folder navigate to \data\user_data\footprint_data\mosaic_single_tiff

Publish a layer from that store, choosing the name of the data store created above and then clicking on the publish button.
Warning
Fill the field Declared CRS with the value EPSG:4326 if the CRS is not automatically set.
The layer will be rendered depending on the value of the FootprintBehavior field:
The user can set one of the three values for the Footprint behaviour as described above (None, Transparent, Cut).
After that, the user must confirm the modification by clicking on the Save button on the bottom side of the page.
Here are presented the results for each dataset.
This is an example of mosaic (mosaic_single_tiff) without applying Footprint:
And this is the result of setting FootprintBehavior to Cut:
Then navigate the filesystem to the mosaic directory, open the file footprints.properties (or create it if it does not exist) and write:
footprint_inset=0.01
footprint_inset_type=full
to add an inset.
Note
Remember that each modification on footprints.properties requires a GeoServer catalog and a GeoServer resource cache reloading in order to apply the changes.
If an Inset is added, the final mosaic is:
Repeat the steps described above but loading the mosaic_sample folder from %TRAINING_ROOT%\data\user_data\footprint_data\mosaic_sample
This is another example of mosaic (mosaic_sample) without Footprint:
And now after setting FootprintBehavior to Transparent (no Inset is used) on the Layer:
For testing this functionality the user must:
Clone the directory %TRAINING_ROOT%\data\user_data\footprint_data\mosaic_sample and call it mosaic_sample2
Rename all the footprints.xxx files that compose the shapefile to mask.xxx (don't rename footprints.properties too!) and load another ImageMosaic datastore.
Open the footprints.properties file (create it if it does not exist) and write:
footprint_source=mask.shp
footprint_inset=0.00001
footprint_inset_type=border
in order to specify an inset and use a footprint shapefile with a custom name.
- Then publish the ImageMosaic located in the cloned directory called mosaic_sample2
The result of setting FootprintBehavior to Transparent, Inset type to border and Inset value to 0.00001 is:
Advanced Processing With GDAL Utilities¶
In this section we include some advanced examples of raster data processing with the GDAL utilities, to prepare data for serving with GeoServer. Below you can find a list of examples.
Example n° 1: Serving a large number of GrayScale GeoTiff with Palette¶
In this example there is a group of Gray GeoTiff images. The purpose of this section is to describe how to merge these images using GDAL. These data are taken from the Regione Marche Cartographic Portal.
Note
Data have the same pixel resolution and same Coordinate Reference System EPSG:26592.
- Navigate to the workshop directory
$TRAINING_ROOT/data/user_data/gdal_processing_data
(on Windows%TRAINING_ROOT%\data\user_data\gdal_processing_data
) and find the grayscale_data directory. - Navigate inside the grayscale_data directory with the SDKshell.
Note
The following operations must be executed from the shell inside the selected directory. In Windows, run setenv.bat if not already launched.
Call the gdalinfo command on an image to retrieve the associated information:
gdalinfo 32501_.tif
And the result is:
Driver: GTiff/GeoTIFF Files: 32501_.tif 32501_.tfw Size is 5494, 4526 Coordinate System is `' Origin = (2356751.582169299000000,4762684.428062002200000) Pixel Size = (1.269090000000000,-1.269090000000000) Metadata: TIFFTAG_RESOLUTIONUNIT=2 (pixels/inch) TIFFTAG_XRESOLUTION=1200 TIFFTAG_YRESOLUTION=1200 Image Structure Metadata: COMPRESSION=LZW INTERLEAVE=BAND Corner Coordinates: Upper Left ( 2356751.582, 4762684.428) Lower Left ( 2356751.582, 4756940.527) Upper Right ( 2363723.963, 4762684.428) Lower Right ( 2363723.963, 4756940.527) Center ( 2360237.772, 4759812.477) Band 1 Block=5494x1 Type=Byte, ColorInterp=Palette Color Table (RGB with 256 entries) 0: 0,0,0,255 1: 1,1,1,255 2: 2,2,2,255 ~ 254: 254,254,254,255 255: 255,255,255,255
From the gdalinfo output it is possible to note:
- No CRS definition: an image without a CRS cannot be displayed in GeoServer.
- Striped tiles (5494x1).
- LZW compression.
- Color interpretation as a Palette.
Using gdal_translate it is possible to change the color interpretation from Palette to Gray:
gdal_translate -expand gray -a_srs EPSG:26592 -of vrt $f ${f:0:6}.vrt
The final image format is not GeoTiff but VRT. This format simply creates an XML file containing information about the operations to perform on the image; the output image is computed only when it must be shown on screen. The CRS is set with the -a_srs parameter. The color interpretation can be set to gray because each palette value is equal to a grayscale value (this last step is optional).
Note
The expand gray option does not create a multi-banded image; only one band is present.
Note
In the future, a possible improvement could be processing the input image with the color interpretation set to gray and computing the optimal palette on the final image.
To execute the same operation on all the input images, a script called script.sh (Linux) or script.bat (Windows) must be created and executed:
Note
In order to edit the scripts use a basic editor such as Notepad on Windows or gedit on Linux. Remember that on Linux, after creating the script, it must be marked as executable with the command
chmod +x <script_name>.sh
Linux:
#!/bin/bash
FILES="*.tif"
echo start
for f in $FILES
do
    echo $f
    gdal_translate -expand gray -a_srs EPSG:26592 -of vrt $f ${f:0:6}.vrt
done
echo stop
Windows:
for /R %%f in (*.tif) do (
    gdal_translate -expand gray -a_srs EPSG:26592 -of vrt %%~f %%~f.vrt
)
Creating a list of the VRT files:
ls *.vrt > list.txt (Linux) or dir /b *.vrt > list.txt (Windows)
Merging of all the input files with the gdalbuildvrt command:
gdalbuildvrt -srcnodata 255 -vrtnodata 255 -resolution highest -input_file_list list.txt merged_vrt.vrt
Parameters used:
- -srcnodata 255 -vrtnodata 255: sets the input and output image nodata values.
- -resolution highest: selects the highest image resolution.
- -input_file_list list.txt: defines the input file list.
The result of calling gdalinfo on the output image is:
Driver: VRT/Virtual Raster Files: merged_vrt.vrt 32501_.vrt ~ 32507_.vrt Size is 16342, 9157 Coordinate System is: PROJCS["Monte Mario (Rome) / Italy zone 2 (deprecated)", GEOGCS["Monte Mario (Rome)", DATUM["Monte_Mario_Rome", SPHEROID["International 1924",6378388,297, AUTHORITY["EPSG","7022"]], TOWGS84[-104.1,-49.1,-9.9,0.971,-2.917,0.714,-11.68], AUTHORITY["EPSG","6806"]], PRIMEM["Rome",12.45233333333333, AUTHORITY["EPSG","8906"]], UNIT["degree",0.0174532925199433, AUTHORITY["EPSG","9122"]], AUTHORITY["EPSG","4806"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",2.54766666666666], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",2520000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AXIS["X",EAST], AXIS["Y",NORTH], AUTHORITY["EPSG","26592"]] Origin = (2356629.695870598300000,4762684.428062002200000) Pixel Size = (1.267290000000000,-1.267290000000000) Corner Coordinates: Upper Left ( 2356629.696, 4762684.428) ( 0d32'36.59"E, 42d59'54.65"N) Lower Left ( 2356629.696, 4751079.854) ( 0d32'48.78"E, 42d53'38.68"N) Upper Right ( 2377339.749, 4762684.428) ( 0d47'50.77"E, 43d 0' 9.65"N) Lower Right ( 2377339.749, 4751079.854) ( 0d48' 1.42"E, 42d53'53.63"N) Center ( 2366984.722, 4756882.141) ( 0d40'19.38"E, 42d56'54.40"N) Band 1 Block=128x128 Type=Byte, ColorInterp=Gray NoData Value=255
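As a quick consistency check, the Size reported above can be re-derived from the corner coordinates and the pixel size printed in the same output:

```python
# Corner coordinates and pixel size copied from the gdalinfo output above.
ulx, uly = 2356629.696, 4762684.428
lrx, lry = 2377339.749, 4751079.854
pixel = 1.26729

# Extent divided by resolution gives the pixel dimensions.
width = round((lrx - ulx) / pixel)
height = round((uly - lry) / pixel)
print(width, height)  # 16342 9157, matching the reported Size
```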
Transforming from VRT to GeoTiff with gdal_translate:
gdal_translate -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "TILED=YES" -co "BIGTIFF=YES" -co "COMPRESS=DEFLATE" merged_vrt.vrt merged_tif.tif
Warning
This operation might take many minutes.
Parameters used:
-co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "TILED=YES": sets the tile dimensions.
-co "BIGTIFF=YES" -co "COMPRESS=DEFLATE": (optional) lossless compression of the image to reduce disk space, similar to LZW.
Note
-co "BIGTIFF=YES" is used because GDAL is not automatically able to convert the GeoTiff image into a BigTiff when compression is set.
From gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_tif.tif Size is 16342, 9157 Coordinate System is: PROJCS["Monte Mario (Rome) / Italy zone 2", GEOGCS["Monte Mario (Rome)", DATUM["Monte_Mario_Rome", SPHEROID["International 1924",6378388,297.0000000000014, AUTHORITY["EPSG","7022"]], TOWGS84[-104.1,-49.1,-9.9,0.971,-2.917,0.714,-11.68], AUTHORITY["EPSG","6806"]], PRIMEM["Rome",12.45233333333333], UNIT["degree",0.0174532925199433], AUTHORITY["EPSG","4806"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",15], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",2520000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AUTHORITY["EPSG","26592"]] Origin = (2356629.695870598300000,4762684.428062002200000) Pixel Size = (1.267290000000000,-1.267290000000000) Metadata: AREA_OR_POINT=Area Image Structure Metadata: COMPRESSION=DEFLATE INTERLEAVE=BAND Corner Coordinates: Upper Left ( 2356629.696, 4762684.428) ( 12d59'44.99"E, 42d59'54.65"N) Lower Left ( 2356629.696, 4751079.854) ( 12d59'57.18"E, 42d53'38.68"N) Upper Right ( 2377339.749, 4762684.428) ( 13d14'59.17"E, 43d 0' 9.65"N) Lower Right ( 2377339.749, 4751079.854) ( 13d15' 9.82"E, 42d53'53.63"N) Center ( 2366984.722, 4756882.141) ( 13d 7'27.78"E, 42d56'54.40"N) Band 1 Block=512x512 Type=Byte, ColorInterp=Gray NoData Value=255
This image can be displayed in GeoServer, but a further optimization step could lead to better performance. There are two possible ways of optimizing the GeoServer performance:
- building image overviews.
- building a pyramid of the image.
(Optional) Optimization.
Building overview with gdaladdo:
gdaladdo -r cubicspline --config COMPRESS_OVERVIEW DEFLATE --config GDAL_TIFF_OVR_BLOCKSIZE 512 merged_tif.tif 2 4 8 16 32
Overviews are reduced views of the input image, used by GeoServer to display the image at lower resolutions.
Parameters used:
- -r cubicspline: sets the interpolation mode to cubicspline (the default is nearest-neighbour).
- --config COMPRESS_OVERVIEW DEFLATE: sets DEFLATE compression on the overviews, to reduce disk space.
- --config GDAL_TIFF_OVR_BLOCKSIZE 512: sets the tile dimensions on the overviews.
- 2 ~ 32: the overview levels to build.
And with gdalinfo:
Band 1 Block=512x512 Type=Byte, ColorInterp=Gray NoData Value=255 Overviews: 8171x4579, 4086x2290, 2043x1145, 1022x573, 511x287
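The overview dimensions listed above follow directly from the subsampling factors passed to gdaladdo: each factor divides the full 16342x9157 size, rounded up so the whole image is covered:

```python
import math

full_w, full_h = 16342, 9157   # size of merged_tif.tif
overviews = [(math.ceil(full_w / f), math.ceil(full_h / f))
             for f in (2, 4, 8, 16, 32)]
print(overviews)
# [(8171, 4579), (4086, 2290), (2043, 1145), (1022, 573), (511, 287)]
```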
Then the result can be displayed in GeoServer by configuring the image as a GeoTiff (see Adding a GeoTiff section).
Building a pyramid through several gdalwarp invocations, each time reducing the image resolution:
gdalwarp -r cubicspline -dstnodata 255 -srcnodata 255 -multi -tr 2.53458 -2.53458 -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 -co TILED=YES -co COMPRESS=DEFLATE merged_tif.tif merged_tif_2.tif
Parameters used:
- -r cubicspline : definition of the interpolation method.
- -dstnodata 255 -srcnodata 255 : definition of the input and output NO DATA values.
- -multi : enables multithreaded processing.
- -tr 2.53458 -2.53458 : definition of the output resolution (double the native pixel size).
Output image from gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_tif_2.tif Size is 8171, 4578 Coordinate System is: ~ Band 1 Block=512x512 Type=Byte, ColorInterp=Gray NoData Value=255
After another gdalwarp on the output image:
gdalwarp -r cubicspline -dstnodata 255 -srcnodata 255 -multi -tr 5.06916 -5.06916 -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 -co TILED=YES -co COMPRESS=DEFLATE merged_tif_2.tif merged_tif_4.tif
And gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_tif_4.tif Size is 4085, 2289 Coordinate System is: ~ Band 1 Block=512x512 Type=Byte, ColorInterp=Gray NoData Value=255
The operation must first be executed on the original image, then repeated on each resulting output image, and so on. This cycle creates a pyramid of images, each with a lower resolution than the previous one.
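The cycle can be scripted. The following sketch (bash assumed) prints the gdalwarp command for each level, doubling the target resolution from the native 1.26729 m pixel size each time, so the commands can be reviewed before running them; remove the echo to execute them (GDAL on the PATH and merged_tif.tif in the current directory are assumed):

```shell
#!/bin/bash
# Print one gdalwarp command per pyramid level, doubling the target
# resolution each time; each command consumes the previous level's output.
base_res=1.26729          # native pixel size, from the gdalinfo output above
prev=merged_tif.tif
for level in 2 4; do
    # target resolution = native pixel size * level
    res=$(awk -v b="$base_res" -v l="$level" 'BEGIN { printf "%g", b * l }')
    out="merged_tif_${level}.tif"
    echo gdalwarp -r cubicspline -dstnodata 255 -srcnodata 255 -multi \
        -tr "$res" "-$res" -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 \
        -co TILED=YES -co COMPRESS=DEFLATE "$prev" "$out"
    prev="$out"
done
```

Extending the `for level in ...` list adds further pyramid levels.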
Then the result can be displayed in GeoServer by configuring the images as a pyramid (see Advanced Mosaic and Pyramid configuration section).
Displaying the result on GeoServer:
Result with gdaladdo
Result with ImagePyramid
Example n° 2: Serving a large number of DTM ASCII Grid Files¶
In this example there is a group of DTM images in ASCII Grid format. The purpose of this section is to describe how the GDAL commands may be used for merging the input files provided. These data are taken from Regione Calabria Open Data Portal at the ASCII - GRID section.
Note
Data have the same pixel resolution and same Coordinate Reference System EPSG:3003.
Warning
This example requires GDAL with Python bindings.
- Navigate to the workshop directory $TRAINING_ROOT/data/user_data/gdal_processing_data (on Windows %TRAINING_ROOT%\data\user_data\gdal_processing_data) and find the DTM_data directory.
- Navigate inside the DTM_data directory with the SDKshell.
Note
The following operations must be executed from the shell inside the selected directory. In Windows, run setenv.bat if not already launched.
Calling the gdalinfo command on an image for retrieving the associated information:
gdalinfo 521150.asc
And the result is:
Driver: AAIGrid/Arc/Info ASCII Grid Files: 521150.asc Size is 193, 154 Coordinate System is `' Origin = (2590740.000000000000000,4433860.000000000000000) Pixel Size = (40.000000000000000,-40.000000000000000) Metadata: AREA_OR_POINT=Point Corner Coordinates: Upper Left ( 2590740.000, 4433860.000) Lower Left ( 2590740.000, 4427700.000) Upper Right ( 2598460.000, 4433860.000) Lower Right ( 2598460.000, 4427700.000) Center ( 2594600.000, 4430780.000) Band 1 Block=193x1 Type=Float32, ColorInterp=Undefined NoData Value=-9999
From gdalinfo it is possible to note:
- No CRS definition. An image without CRS cannot be displayed on GeoServer.
- Striped tiling (193x1).
- No compression.
Listing of all the images into a single text list with the following command:
ls *.asc > list.txt (Linux) or dir /b *.asc > list.txt (Windows)
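The listing step can be checked on a scratch directory before running it on the real data. A minimal sketch (the .asc file names are placeholders):

```shell
# Verify that only the .asc files end up in list.txt, using placeholder names
# in a throwaway directory.
tmpdir=$(mktemp -d)
cd "$tmpdir"
touch 521150.asc 521160.asc readme.txt   # only the .asc files should be listed
ls *.asc > list.txt
count=$(wc -l < list.txt)
echo "listed $count ASCII Grid files"
```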
Merging of all the input files with the gdal_merge.py command:
gdal_merge.py -o merged.tif -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "COMPRESS=DEFLATE" -co "ZLEVEL=9" -co "BIGTIFF=YES" -init -9999 -a_nodata -9999 -n -9999 -ot Float32 --optfile list.txt
Note
This command must be executed with Python to avoid import errors.
Parameters used:
-o merged.tif : definition of the output file name.
-co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" : definition of the tile dimensions.
-co "COMPRESS=DEFLATE" -co "ZLEVEL=9" -co "BIGTIFF=YES" : definition of the compression mode.
Note
-co "BIGTIFF=YES" is used because GDAL cannot automatically convert the GeoTiff image into a BigTiff when compression is set.
-init -9999 : initialization of the image pixels to NO DATA.
-a_nodata -9999 : definition of the output value for NO DATA.
-n -9999 : definition of the input pixel value to ignore during merging.
-ot Float32 : definition of the image output type.
--optfile list.txt : definition of the input file list.
The gdalinfo output on the merged image is:
Driver: GTiff/GeoTIFF Files: merged.tif Size is 3613, 6284 Coordinate System is `' Origin = (2570700.000000000000000,4445900.000000000000000) Pixel Size = (40.000000000000000,-40.000000000000000) Image Structure Metadata: COMPRESSION=DEFLATE INTERLEAVE=BAND Corner Coordinates: Upper Left ( 2570700.000, 4445900.000) Lower Left ( 2570700.000, 4194540.000) Upper Right ( 2715220.000, 4445900.000) Lower Right ( 2715220.000, 4194540.000) Center ( 2642960.000, 4320220.000) Band 1 Block=512x512 Type=Float32, ColorInterp=Gray NoData Value=-9999
The merged image has good tiling (512x512) and compression, but the CRS is still undefined.
Setting of the image CRS with gdal_translate:
gdal_translate -a_srs "EPSG:3003" -co "TILED=YES" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "COMPRESS=DEFLATE" -co "ZLEVEL=9" -co "BIGTIFF=YES" merged.tif merged_CRS.tif
The tiling and compression parameters are repeated because, by default, GDAL does not compress the output image and uses poor tiling.
From gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_CRS.tif Size is 3613, 6284 Coordinate System is: PROJCS["Monte Mario / Italy zone 1", GEOGCS["Monte Mario", DATUM["Monte_Mario", SPHEROID["International 1924",6378388,297.0000000000014, AUTHORITY["EPSG","7022"]], TOWGS84[-104.1,-49.1,-9.9,0.971,-2.917,0.714,-11.68], AUTHORITY["EPSG","6265"]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433], AUTHORITY["EPSG","4265"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",9], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",1500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AUTHORITY["EPSG","3003"]] Origin = (2570700.000000000000000,4445900.000000000000000) Pixel Size = (40.000000000000000,-40.000000000000000) Metadata: AREA_OR_POINT=Area Image Structure Metadata: COMPRESSION=DEFLATE INTERLEAVE=BAND Corner Coordinates: Upper Left ( 2570700.000, 4445900.000) ( 21d25'57.43"E, 39d29'28.80"N) Lower Left ( 2570700.000, 4194540.000) ( 21d 3'12.94"E, 37d16'39.68"N) Upper Right ( 2715220.000, 4445900.000) ( 23d 3'58.08"E, 39d18' 6.80"N) Lower Right ( 2715220.000, 4194540.000) ( 22d38'27.42"E, 37d 6' 9.29"N) Center ( 2642960.000, 4320220.000) ( 22d 2'40.73"E, 38d17'47.75"N) Band 1 Block=512x512 Type=Float32, ColorInterp=Gray NoData Value=-9999
This image can be displayed in GeoServer, but a further optimization step can yield better performance.
(Optional) Creation of the overviews associated to the merged image for having better throughput:
gdaladdo -r nearest --config COMPRESS_OVERVIEW DEFLATE --config GDAL_TIFF_OVR_BLOCKSIZE 512 merged_CRS.tif 2 4 8 16
Overviews are reduced views of the input image used by GeoServer for displaying it at lower resolutions.
Parameters used:
- -r nearest : definition of the interpolation method.
- --config COMPRESS_OVERVIEW DEFLATE : definition of the compression on overviews.
- --config GDAL_TIFF_OVR_BLOCKSIZE 512 : definition of the tile dimensions on overviews.
And with gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_CRS.tif Size is 3613, 6284 Coordinate System is: PROJCS["Monte Mario / Italy zone 1", GEOGCS["Monte Mario", DATUM["Monte_Mario", SPHEROID["International 1924",6378388,297.0000000000014, AUTHORITY["EPSG","7022"]], TOWGS84[-104.1,-49.1,-9.9,0.971,-2.917,0.714,-11.68], AUTHORITY["EPSG","6265"]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433], AUTHORITY["EPSG","4265"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",9], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",1500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AUTHORITY["EPSG","3003"]] Origin = (2570700.000000000000000,4445900.000000000000000) Pixel Size = (40.000000000000000,-40.000000000000000) Metadata: AREA_OR_POINT=Area Image Structure Metadata: COMPRESSION=DEFLATE INTERLEAVE=BAND Corner Coordinates: Upper Left ( 2570700.000, 4445900.000) ( 21d25'57.43"E, 39d29'28.80"N) Lower Left ( 2570700.000, 4194540.000) ( 21d 3'12.94"E, 37d16'39.68"N) Upper Right ( 2715220.000, 4445900.000) ( 23d 3'58.08"E, 39d18' 6.80"N) Lower Right ( 2715220.000, 4194540.000) ( 22d38'27.42"E, 37d 6' 9.29"N) Center ( 2642960.000, 4320220.000) ( 22d 2'40.73"E, 38d17'47.75"N) Band 1 Block=512x512 Type=Float32, ColorInterp=Gray NoData Value=-9999 Overviews: 1807x3142, 904x1571, 452x786, 226x393
Then the result can be displayed in GeoServer by configuring the image as a GeoTiff (see Adding a GeoTiff section).
Displaying the result on GeoServer:
Example n° 3: Serving a large number of Cartographic Black/White GeoTiff with Palette¶
In this example there is a group of Cartographic Black/White images from “CARTA TECNICA DELLA REGIONE TOSCANA”. The purpose of this example is to describe how the GDAL commands may be used for merging the input files provided.
Note
Data have the same pixel resolution and same Coordinate Reference System EPSG:25832. Also, each pixel is stored in a single bit.
- Navigate to the workshop directory $TRAINING_ROOT/data/user_data/gdal_processing_data (on Windows %TRAINING_ROOT%\data\user_data\gdal_processing_data) and find the CTR_data directory.
- Navigate inside the CTR_data directory with the SDKshell.
Note
The following operations must be executed from the shell inside the selected directory. In Windows, run setenv.bat if not already launched.
Calling the gdalinfo command on an image for retrieving the associated information:
gdalinfo 20E27_1994.TIF
And the result is:
Driver: GTiff/GeoTIFF Files: 20E27_1994.TIF 20E27_1994.TFW Size is 16050, 14050 Coordinate System is `' GeoTransform = 600769.026848671, 0.1, 7.3789937e-007 4863785.940434861, -8.172141e-008, -0.1 Metadata: TIFFTAG_RESOLUTIONUNIT=2 (pixels/inch) TIFFTAG_SOFTWARE=IrfanView TIFFTAG_XRESOLUTION=72 TIFFTAG_YRESOLUTION=72 Image Structure Metadata: COMPRESSION=PACKBITS INTERLEAVE=BAND MINISWHITE=YES Corner Coordinates: Upper Left ( 600769.027, 4863785.940) Lower Left ( 600769.037, 4862380.940) Upper Right ( 602374.027, 4863785.939) Lower Right ( 602374.037, 4862380.939) Center ( 601571.532, 4863083.440) Band 1 Block=16050x4 Type=Byte, ColorInterp=Palette Image Structure Metadata: NBITS=1 Color Table (RGB with 2 entries) 0: 255,255,255,255 1: 0,0,0,255
From gdalinfo it is possible to note:
- No CRS definition. An image without CRS cannot be displayed on GeoServer.
- Color Interpretation as palette.
- A GeoTransformation matrix is associated.
- Striped tiling (16050x4).
- PackBits compression.
Executing the following commands on the tiff images:
gdalwarp -t_srs EPSG:25832 -of vrt $f ${f:0:10}_temp.vrt
gdal_translate -expand gray -of vrt ${f:0:10}_temp.vrt ${f:0:10}.vrt
The first operation sets the CRS on each image and creates a VRT file, reducing disk usage; gdalwarp also applies the GeoTransform matrix associated with the image.
The second operation changes the color interpretation from palette to gray. This is done because the final steps will introduce additional grey levels through interpolation. Expanding to the gray color interpretation increases the pixel depth from 1 to 8 bits.
Note
The -expand gray option does not create a multi-band image but a single-band one.
Note
If the user wants to keep the palette, they can go directly to the paragraph below describing the optional elaboration without expanding the palette.
These operations must be executed inside a script:
Linux:
#!/bin/bash
FILES="*.TIF"
echo start
for f in $FILES
do
    echo $f
    gdalwarp -t_srs EPSG:25832 -of vrt $f ${f:0:10}_temp.vrt
    gdal_translate -expand gray -of vrt ${f:0:10}_temp.vrt ${f:0:10}.vrt
done
echo stop
Windows:
for /R %%f in (*.tif) do (
    gdalwarp -t_srs EPSG:25832 -of vrt %%~f %%~f_temp.vrt
    gdal_translate -expand gray -of vrt %%~f_temp.vrt %%~f.vrt
)
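The ${f:0:10} in the Linux script is bash substring expansion: it keeps the first ten characters of the file name, i.e. the tile name without the .TIF extension, which is then reused to name the intermediate and final VRTs. A standalone sketch:

```shell
# ${var:offset:length} keeps "length" characters starting at "offset".
f="20E27_1994.TIF"
base=${f:0:10}                 # first 10 characters: the tile name
temp_vrt="${base}_temp.vrt"    # intermediate VRT produced by gdalwarp
final_vrt="${base}.vrt"        # final VRT produced by gdal_translate
echo "$f -> $temp_vrt -> $final_vrt"
```

Note that this expansion requires bash; it is not available in a plain POSIX shell.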
Listing of all the VRT files into a single text list with the following command:
ls *.vrt > list.txt (Linux) or dir /b *.vrt > list.txt (Windows)
Warning
Delete the _temp.vrt files from the list because they overlap with the final vrt files created.
Merging of all the input files with the gdalbuildvrt command:
gdalbuildvrt -srcnodata 255 -vrtnodata 255 -input_file_list list.txt merged_vrt.vrt
Parameters used:
- -srcnodata 255 -vrtnodata 255 : definition of the No Data associated with the file.
- -input_file_list list.txt : definition of input files to elaborate.
The gdalinfo output on the merged image is:
Driver: VRT/Virtual Raster Files: merged_vrt_nodata.vrt 20E27_1994.vrt ~ 20E60_1995.vrt Size is 50052, 62047 Coordinate System is: PROJCS["ETRS89 / UTM zone 32N", GEOGCS["ETRS89", DATUM["European_Terrestrial_Reference_System_1989", SPHEROID["GRS 1980",6378137,298.257222101, AUTHORITY["EPSG","7019"]], TOWGS84[0,0,0,0,0,0,0], AUTHORITY["EPSG","6258"]], PRIMEM["Greenwich",0, AUTHORITY["EPSG","8901"]], UNIT["degree",0.0174532925199433, AUTHORITY["EPSG","9122"]], AUTHORITY["EPSG","4258"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",9], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AXIS["Easting",EAST], AXIS["Northing",NORTH], AUTHORITY["EPSG","25832"]] Origin = (600768.734234663190000,4863785.940434861000000) Pixel Size = (0.100000372821407,-0.100000372821407) Corner Coordinates: Upper Left ( 600768.734, 4863785.940) ( 10d15'18.69"E, 43d55'13.06"N) Lower Left ( 600768.734, 4857581.217) ( 10d15'14.46"E, 43d51'51.99"N) Upper Right ( 605773.953, 4863785.940) ( 10d19' 3.07"E, 43d55'10.54"N) Lower Right ( 605773.953, 4857581.217) ( 10d18'58.64"E, 43d51'49.47"N) Center ( 603271.344, 4860683.579) ( 10d17' 8.72"E, 43d53'31.28"N) Band 1 Block=128x128 Type=Byte, ColorInterp=Gray NoData Value=255
Transforming from VRT to GeoTiff with gdal_translate:
gdal_translate -a_nodata none -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "TILED=YES" -co "BIGTIFF=YES" -co "COMPRESS=DEFLATE" merged_vrt.vrt merged_tif.tif
The various input parameters are:
-a_nodata none : avoids setting 255 as No Data, for better image optimization.
-co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "TILED=YES" : definition of the tile dimensions.
-co "BIGTIFF=YES" -co "COMPRESS=DEFLATE" : definition of the compression method.
Note
BIGTIFF=YES must be set for big images because, when compression is used, gdal_translate cannot check in advance whether the final image will be a BigTiff or not.
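To see why the flag matters here, the uncompressed size of this image is easy to compute (a sketch; one byte per pixel because the single band is Type=Byte):

```shell
# Uncompressed size of the 50052x62047 single-band Byte image.
width=50052
height=62047
bytes=$(( width * height ))   # one byte per pixel (Type=Byte, one band)
gib=$(awk -v b="$bytes" 'BEGIN { printf "%.2f", b / (1024 * 1024 * 1024) }')
echo "raw size: $bytes bytes (~$gib GiB); classic TIFF is limited to 4 GiB"
```

At roughly 2.9 GiB raw, the image is close to the 4 GiB classic TIFF limit, and since the DEFLATE output size cannot be known in advance, forcing BIGTIFF=YES is the safe choice.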
From gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_tif.tif Size is 50052, 62047 Coordinate System is: PROJCS["ETRS89 / UTM zone 32N", GEOGCS["ETRS89", DATUM["European_Terrestrial_Reference_System_1989", SPHEROID["GRS 1980",6378137,298.2572221010002, AUTHORITY["EPSG","7019"]], TOWGS84[0,0,0,0,0,0,0], AUTHORITY["EPSG","6258"]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433], AUTHORITY["EPSG","4258"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",9], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AUTHORITY["EPSG","25832"]] Origin = (600768.734234663190000,4863785.940434861000000) Pixel Size = (0.100000372821407,-0.100000372821407) Metadata: AREA_OR_POINT=Area Image Structure Metadata: COMPRESSION=DEFLATE INTERLEAVE=BAND Corner Coordinates: Upper Left ( 600768.734, 4863785.940) ( 10d15'18.69"E, 43d55'13.06"N) Lower Left ( 600768.734, 4857581.217) ( 10d15'14.46"E, 43d51'51.99"N) Upper Right ( 605773.953, 4863785.940) ( 10d19' 3.07"E, 43d55'10.54"N) Lower Right ( 605773.953, 4857581.217) ( 10d18'58.64"E, 43d51'49.47"N) Center ( 603271.344, 4860683.579) ( 10d17' 8.72"E, 43d53'31.28"N) Band 1 Block=512x512 Type=Byte, ColorInterp=Gray
This image can be displayed in GeoServer, but a further optimization step can yield better performance. There are two ways to optimize performance in GeoServer:
- building image overviews.
- building a pyramid of the image.
(Optional) Optimization methods. Two possible optimizations are described here, each using a different interpolation type:
Creation of the overviews associated to the merged image for having better throughput:
gdaladdo -r average --config COMPRESS_OVERVIEW DEFLATE --config GDAL_TIFF_OVR_BLOCKSIZE 512 merged_tif.tif 2 4 8 16 32 64 128
Overviews are reduced views of the input image used by GeoServer for displaying it at lower resolutions.
Parameters used:
- -r average : definition of the interpolation method.
- --config COMPRESS_OVERVIEW DEFLATE : definition of the compression on overviews.
- --config GDAL_TIFF_OVR_BLOCKSIZE 512 : definition of the tile dimensions on overviews.
- 2 ~ 128 : definition of the overview levels.
And with gdalinfo:
Driver: GTiff/GeoTIFF Files: merged_tif.tif Size is 50052, 62047 ~ Band 1 Block=512x512 Type=Byte, ColorInterp=Gray Overviews: 25026x31024, 12513x15512, 6257x7756, 3129x3878, 1565x1939, 783x970, 392x485
Then the result can be displayed in GeoServer by configuring the image as a GeoTiff (see Adding a GeoTiff section).
(Optional) Creation of a pyramid associated to the merged image and displaying the image on GeoServer with the ImagePyramid plugin (see Advanced Mosaic and Pyramid configuration section).
For building a pyramid the gdalwarp command must be called several times. The operation to execute on the first image is:
gdalwarp -r cubic -multi -tr 0.200000745642814 -0.200000745642814 -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 -co TILED=YES -co COMPRESS=DEFLATE merged_tif.tif merged_tif_2.tif
The parameters are:
- -r cubic : definition of the interpolation method (average interpolation can be used only with GDAL 1.10).
- -multi : enables multithreaded processing.
- -tr 0.200000745642814 -0.200000745642814 : definition of the image resolution.
From gdalinfo on the result image:
Driver: GTiff/GeoTIFF Files: merged_tif_2.tif Size is 25026, 31024 ~ Band 1 Block=512x512 Type=Byte, ColorInterp=Gray
Then the same operation, with doubled resolution values, must be executed on the resulting image:
gdalwarp -r cubic -multi -tr 0.400001491285628 -0.400001491285628 -co BLOCKXSIZE=512 -co BLOCKYSIZE=512 -co TILED=YES -co COMPRESS=DEFLATE merged_tif_2.tif merged_tif_4.tif
This operation must be repeated until the final image has a resolution 128 times lower than that of the initial image.
Note
Each gdalwarp call halves the image resolution.
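Since each call halves the resolution, the full set of -tr values follows from the native 0.100000372821407 m pixel size. A sketch (compare the first two printed values with the gdalwarp calls above):

```shell
# Target resolution for each of the 7 pyramid levels: the native pixel
# size doubled at each step, ending 128 times coarser than the original.
native=0.100000372821407   # pixel size reported by gdalinfo
mult=1
for i in 1 2 3 4 5 6 7; do
    mult=$(( mult * 2 ))
    res=$(awk -v n="$native" -v m="$mult" 'BEGIN { printf "%.15f", n * m }')
    echo "level $i: -tr $res -$res (${mult}x coarser)"
done
```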
After creating the various rasters, save them inside a new directory. This directory must contain sub-directories numbered from 1 to 7, each holding one of the reduced rasters (from the half-resolution raster in 1 down to the smallest in 7), while the original raster stays in the parent directory.
Then the user can configure this structure with the ImagePyramid plugin.
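The layout can be prepared with a few shell commands. A sketch, where the directory name ctr_pyramid and the merged_tif_N.tif file names are this example's convention and the move lines are only printed so the mapping can be checked before running it for real:

```shell
# Pyramid layout for the ImagePyramid plugin: the original raster stays at
# the top level, while sub-directory i holds the raster 2^i times coarser.
workdir=$(mktemp -d)
cd "$workdir"
mkdir -p ctr_pyramid
echo "ctr_pyramid/ <- merged_tif.tif (original resolution)"
level=1
for i in 1 2 3 4 5 6 7; do
    level=$(( level * 2 ))
    mkdir -p "ctr_pyramid/$i"
    echo "ctr_pyramid/$i <- merged_tif_${level}.tif"
done
```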
Displaying the result on GeoServer:
If the user wants to keep the palette, the steps are quite similar.
Executing the following commands on the tiff images:
gdalwarp -t_srs EPSG:25832 -of vrt $f ${f:0:10}_temp.vrt
gdal_translate -of vrt ${f:0:10}_temp.vrt ${f:0:10}.vrt
These operations must be executed inside a script:
Linux:
#!/bin/bash
FILES="*.TIF"
echo start
for f in $FILES
do
    echo $f
    gdalwarp -t_srs EPSG:25832 -of vrt $f ${f:0:10}_temp.vrt
    gdal_translate -of vrt ${f:0:10}_temp.vrt ${f:0:10}.vrt
done
echo stop
Windows:
for /R %%f in (*.tif) do (
    gdalwarp -t_srs EPSG:25832 -of vrt %%~f %%~f_temp.vrt
    gdal_translate -of vrt %%~f_temp.vrt %%~f.vrt
)
Listing of all the VRT files into a single text list with the following command:
ls *.vrt > list.txt (Linux) or dir /b *.vrt > list.txt (Windows)
Warning
Delete the _temp.vrt files from the list because they overlap with the final vrt files created.
Merging of all the input files with the gdalbuildvrt command:
gdalbuildvrt -srcnodata 0 -vrtnodata 0 -input_file_list list.txt merged_vrt.vrt
Parameters used:
- -srcnodata 0 -vrtnodata 0 : definition of the No Data associated with the file.
- -input_file_list list.txt : definition of input files to elaborate.
The gdalinfo output on the merged image is:
Driver: VRT/Virtual Raster Files: merged_vrt_0.vrt 20E27_1994.TIF.vrt ~ 20E60_1995.TIF.vrt Size is 50052, 62047 Coordinate System is: PROJCS["ETRS89 / UTM zone 32N", GEOGCS["ETRS89", DATUM["European_Terrestrial_Reference_System_1989", SPHEROID["GRS 1980",6378137,298.257222101, AUTHORITY["EPSG","7019"]], TOWGS84[0,0,0,0,0,0,0], AUTHORITY["EPSG","6258"]], PRIMEM["Greenwich",0, AUTHORITY["EPSG","8901"]], UNIT["degree",0.0174532925199433, AUTHORITY["EPSG","9122"]], AUTHORITY["EPSG","4258"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",9], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AXIS["Easting",EAST], AXIS["Northing",NORTH], AUTHORITY["EPSG","25832"]] Origin = (600768.734234663190000,4863785.940434861000000) Pixel Size = (0.100000372821407,-0.100000372821407) Corner Coordinates: Upper Left ( 600768.734, 4863785.940) ( 10d15'18.69"E, 43d55'13.06"N) Lower Left ( 600768.734, 4857581.217) ( 10d15'14.46"E, 43d51'51.99"N) Upper Right ( 605773.953, 4863785.940) ( 10d19' 3.07"E, 43d55'10.54"N) Lower Right ( 605773.953, 4857581.217) ( 10d18'58.64"E, 43d51'49.47"N) Center ( 603271.344, 4860683.579) ( 10d17' 8.72"E, 43d53'31.28"N) Band 1 Block=128x128 Type=Byte, ColorInterp=Palette NoData Value=0 Color Table (RGB with 2 entries) 0: 255,255,255,255 1: 0,0,0,255
Transforming from VRT to GeoTiff with gdal_translate:
gdal_translate -co "NBITS=1" -co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "TILED=YES" -co "BIGTIFF=YES" -co "COMPRESS=DEFLATE" merged_vrt.vrt merged_tif.tif
The various input parameters are:
-co "NBITS=1" : sets the bits per pixel to 1, because the palette contains only 0 or 1.
-co "BLOCKXSIZE=512" -co "BLOCKYSIZE=512" -co "TILED=YES" : definition of the tile dimensions.
-co "BIGTIFF=YES" -co "COMPRESS=DEFLATE" : definition of the compression method.
Note
BIGTIFF=YES must be set for big images because, when compression is used, gdal_translate cannot check in advance whether the final image will be a BigTiff or not.
From gdalinfo:
Size is 50052, 62047 Coordinate System is: PROJCS["ETRS89 / UTM zone 32N", GEOGCS["ETRS89", DATUM["European_Terrestrial_Reference_System_1989", SPHEROID["GRS 1980",6378137,298.2572221010002, AUTHORITY["EPSG","7019"]], TOWGS84[0,0,0,0,0,0,0], AUTHORITY["EPSG","6258"]], PRIMEM["Greenwich",0], UNIT["degree",0.0174532925199433], AUTHORITY["EPSG","4258"]], PROJECTION["Transverse_Mercator"], PARAMETER["latitude_of_origin",0], PARAMETER["central_meridian",9], PARAMETER["scale_factor",0.9996], PARAMETER["false_easting",500000], PARAMETER["false_northing",0], UNIT["metre",1, AUTHORITY["EPSG","9001"]], AUTHORITY["EPSG","25832"]] Origin = (600768.734234663190000,4863785.940434861000000) Pixel Size = (0.100000372821407,-0.100000372821407) Metadata: AREA_OR_POINT=Area Image Structure Metadata: COMPRESSION=DEFLATE INTERLEAVE=BAND Corner Coordinates: Upper Left ( 600768.734, 4863785.940) ( 10d15'18.69"E, 43d55'13.06"N) Lower Left ( 600768.734, 4857581.217) ( 10d15'14.46"E, 43d51'51.99"N) Upper Right ( 605773.953, 4863785.940) ( 10d19' 3.07"E, 43d55'10.54"N) Lower Right ( 605773.953, 4857581.217) ( 10d18'58.64"E, 43d51'49.47"N) Center ( 603271.344, 4860683.579) ( 10d17' 8.72"E, 43d53'31.28"N) Band 1 Block=512x512 Type=Byte, ColorInterp=Palette NoData Value=0 Image Structure Metadata: NBITS=1 Color Table (RGB with 2 entries) 0: 255,255,255,255 1: 0,0,0,255
(Optional) The optimization methods are similar to those described above:
- The overview creation method is the same as described above.
- For creating the pyramid, the commands are the same as described above with the addition of the -co "NBITS=1" creation option.
Displaying the result on GeoServer:
Advanced Vectorial Data Management¶
This module presents working with vector data: how to obtain vector data information, and how to filter, extract, and update it.
In this module you will learn how to:
Retrieving vector data and metadata¶
In this section we will learn how to deal with vector data using WFS. First we will learn how to deal with metadata and then how to retrieve the features. We will be using the layer named Counties in the workshop namespace.
Note
The Open Geospatial Consortium Web Feature Service Interface Standard (WFS) provides an interface allowing requests for geographical features across the web using platform-independent calls. One can think of geographical features as the “source code” behind a map, whereas the WMS interface or online mapping portals like Google Maps return only an image, which end-users cannot edit or spatially analyze.
Navigate to the GeoServer Welcome Page.
On the Welcome page locate the Layer Preview link (no need to login).
Layer Preview
Navigate to the WFS GML output of the Counties layer.
WFS GML output
Depending on the browser, the output may be unformatted or recognized as XML. Here is what Firefox 3 shows: http://localhost:8083/geoserver/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=geosolutions:Counties&maxFeatures=50&outputFormat=GML2
Default WFS layer preview.
Note
We recommend the Mozilla Firefox web browser for navigating WFS response documents.
Now that we know the quick and easy way to get WFS data, let's go back and do it the way a standard WFS client works. First, the only thing expected to be known is the WFS server URL: http://localhost:8083/geoserver/ows?service=WFS&version=1.0.0
Using that URL, we can issue a GetCapabilities request in order to know which layers it contains and which operations are supported: http://localhost:8083/geoserver/ows?service=WFS&version=1.0.0&request=GetCapabilities
GetCapabilities response
If we scroll down, we will find the Counties feature type described:
GetCapabilities response (Counties feature type)
Now let's request more information for the Counties layer using a DescribeFeatureType request, which gives us information about the field names and types as well as the geometry type, in this case MultiPolygon.
DescribeFeatureType response for Counties feature type
After that, we can issue a basic GetFeature request, which looks like this: http://localhost:8083/geoserver/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=geosolutions:Counties&featureId=Counties.1
Note
Notice it's almost the same as the one that GeoServer generated, but it requests a single feature, specifying its identifier via featureId=Counties.1.
In the next section we will see how to filter the WFS output based on various attributes.
Filtering and Extracting vector data¶
WFS also defines mechanisms to only retrieve a subset of the data that matches some specified constraints.
Create a new request.xml file in the training root and paste the following request into it:

<wfs:GetFeature xmlns:wfs='http://www.opengis.net/wfs' xmlns:ogc='http://www.opengis.net/ogc'
    service='WFS' version='1.0.0'>
  <Query typeName='geosolutions:WorldCountries'>
    <ogc:Filter>
      <ogc:FeatureId fid='WorldCountries.137' />
    </ogc:Filter>
  </Query>
</wfs:GetFeature>
Go to the command line, move to the training folder root, and execute the request using curl:
curl -XPOST -d @request.xml -H "Content-type: application/xml" "http://localhost:8083/geoserver/ows"
Now let's write an equivalent request, using the name of the country instead of the id, issuing a GET and encoding the filter in a language called CQL. Copy the following URL into your browser's navigation bar: http://localhost:8083/geoserver/wfs?request=GetFeature&service=WFS&version=1.0.0&typeName=geosolutions:WorldCountries&outputFormat=GML2&CQL_FILTER=NAME=%27Monaco%27
The CQL filter in the Firefox URL bar
The results of the CQL filter
That’s how a feature set is filtered with either the OGC encoding or the CQL notation.
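The GET URL above can be assembled from its pieces, which makes the CQL part easier to read. A sketch (%27 is the URL-encoded single quote; the commented curl line assumes the training GeoServer is running on localhost:8083):

```shell
# Build the CQL-filtered GetFeature URL piece by piece.
base="http://localhost:8083/geoserver/wfs"
params="request=GetFeature&service=WFS&version=1.0.0"
params="$params&typeName=geosolutions:WorldCountries&outputFormat=GML2"
cql="NAME=%27Monaco%27"        # CQL: NAME = 'Monaco' (%27 encodes the quote)
url="$base?$params&CQL_FILTER=$cql"
echo "$url"
# curl -s "$url"   # uncomment to fetch, with the training GeoServer running
```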
In the next section we will see how to edit the features via a protocol called WFS Transactional (WFS-T).
Modifying Feature Types¶
GeoServer provides a fully Transactional Web Feature Service (WFS-T) which enables users to insert/delete/modify the available FeatureTypes. This section shows a few of the GeoServer WFS-T capabilities and interactions with GIS clients.
Modifying Feature Types using GeoNode¶
Open your instance of GeoNode and log in as a superuser or as a user having write rights on at least some Layers.
GeoNode Layers
Select a Layer on which you have rights to edit data.
Warning
You can edit only Layers which have been stored in a JDBC DataStore, like a database. On GeoNode this is possible only if the DB datastore has been enabled in the settings.
GeoNode Layer Select
Click on Edit Layer and then, from the pop-up window, click on Edit data
Warning
The Edit data button will be available only for writable Layers (see above).
GeoNode Edit Layer
When the Map shows up along with your Layer, zoom in to a region you want to update or create.
GeoNode Navigate Layer
Identify the Edit button on the map top toolbar and click the small arrow on its left to bring up the context menu.
GeoNode Edit Button
Let's first modify a FeatureType: click on Modify.
GeoNode Modify FeatureType
Select a geometry and click over it. From the small info dialog window, select Edit
GeoNode Editing a FeatureType
Modify the geometry and/or the values of the field as you wish, and then click on Save.
Hint
If you want you can also completely delete the FeatureType by clicking on the Delete button from the same info dialog window.
GeoNode Updating a FeatureType
Verify that the changes have been stored on GeoServer.
Replace the URL
http://your_host/maps/new?layer=geonode:streams_1
with
http://your_host/geoserver/wms/reflect?layers=geonode:streams_1
Warning
Pay attention to the parameter: layer becomes layers, plural. If you want, you can also add an output format parameter, like format=openlayers. In that case the complete URL becomes: http://your_host/geoserver/wms/reflect?layers=geonode:streams_1&format=openlayers
GeoServer Displaying the Updated Layer
Click on the FeatureType in order to display the updated values too.
Repeat the FeatureType editing, but this time click on Create (or simply click the Edit button itself rather than the small arrow on its right).
GeoNode Creating a FeatureType
GeoServer Displaying the New Feature
Modifying Feature Types using a Desktop GIS client¶
Open the uDig GIS desktop client by going to the command line, changing directory to the training root if necessary, and running the udig command.
Add the GeoServer WFS to the catalog.
Use the import button in the catalog tab, and select “data” in the first page of the wizard
Selection of Web Feature Service data
Insert into the URL text box the following address:
http://localhost:8083/geoserver/wfs?request=GetCapabilities&service=WFS
The WFS URL
Select Mainrd from the list.
WFS Datasets shown into the uDig catalog
Load the Mainrd Feature Type using drag-n-drop.
Importing Mainrd into the map
Perform a zoom operation on the upper-right part of the layer.
Zooming in …
Zooming in …
Using the Select and Edit Geometry tools, try to move/add/remove some vertices of the small line at the center of the screen.
Playing with the Geometry
Once finished, use the Commit tool to persist the changes on GeoServer.
Committing changes through the WFS-T protocol
Use GeoServer Layer Preview to view the changes on the Mainrd layer.
Warning
In order to view the street lines you have to specify the line style in the GetMap request.
Showing the changes to the Mainrd Feature Type
In uDig, inspect the Feature attribute values using the Info tool.
Retrieving Feature Type info from uDig interface
Now open/create the
request.xml
file in the training root directory and insert the following request, which will be used to issue an Update Feature Type request to the WFS-T, updating all roads labelled as Monarch Rd
to Monarch Road:
<wfs:Transaction xmlns:topp="http://www.openplans.org/topp"
                 xmlns:ogc="http://www.opengis.net/ogc"
                 xmlns:wfs="http://www.opengis.net/wfs"
                 service="WFS" version="1.0.0">
  <wfs:Update typeName="geosolutions:Mainrd">
    <wfs:Property>
      <wfs:Name>LABEL_NAME</wfs:Name>
      <wfs:Value>Monarch Road</wfs:Value>
    </wfs:Property>
    <ogc:Filter>
      <ogc:PropertyIsEqualTo>
        <ogc:PropertyName>LABEL_NAME</ogc:PropertyName>
        <ogc:Literal>Monarch Rd</ogc:Literal>
      </ogc:PropertyIsEqualTo>
    </ogc:Filter>
  </wfs:Update>
</wfs:Transaction>
Issue the WFS-T request towards GeoServer using curl on the command line:
curl -XPOST -d @request.xml -H "Content-type: application/xml" "http://localhost:8083/geoserver/ows"
The response should be a TransactionResponse XML document containing a
wfs:SUCCESS
element.
Ask for the info again using the uDig Info tool …
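For scripting, the same transaction can also be built and posted with Python's standard library instead of curl. This is a sketch, not part of the training material: the endpoint URL, layer name, and attribute are the ones used in this exercise and may differ in your setup.

```python
import urllib.request

def build_update_transaction(type_name, prop, old_value, new_value):
    """Build a WFS-T 1.0.0 Update document like the request.xml above."""
    return f"""<wfs:Transaction xmlns:ogc="http://www.opengis.net/ogc"
                 xmlns:wfs="http://www.opengis.net/wfs"
                 service="WFS" version="1.0.0">
  <wfs:Update typeName="{type_name}">
    <wfs:Property>
      <wfs:Name>{prop}</wfs:Name>
      <wfs:Value>{new_value}</wfs:Value>
    </wfs:Property>
    <ogc:Filter>
      <ogc:PropertyIsEqualTo>
        <ogc:PropertyName>{prop}</ogc:PropertyName>
        <ogc:Literal>{old_value}</ogc:Literal>
      </ogc:PropertyIsEqualTo>
    </ogc:Filter>
  </wfs:Update>
</wfs:Transaction>"""

payload = build_update_transaction("geosolutions:Mainrd", "LABEL_NAME",
                                   "Monarch Rd", "Monarch Road")

def send_transaction(body, url="http://localhost:8083/geoserver/ows"):
    """POST the document with an XML content type, equivalent to the curl call."""
    req = urllib.request.Request(url, data=body.encode("utf-8"),
                                 headers={"Content-type": "application/xml"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

# send_transaction(payload)  # requires the training GeoServer to be running
```

As with curl, a successful response is a TransactionResponse document containing a wfs:SUCCESS element.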
Note
In order to issue a GetFeatureInfo request from the OpenLayers MapPreview tool, just left-click over the line.
Obtaining the updated Feature Type info from uDig interface
Finally, obtain the Feature type info using the GetFeatureInfo operation issued directly by the Map Preview.
Obtaining the updated Feature Type info from OpenLayers MapPreview GetFeatureInfo
Spatial Processing with GeoNode¶
Spatial Processing with Sextante¶
Once you have connected to a GeoNode service and its data is available from QGIS, analysis can be performed on it, just as if you were using local data.
SEXTANTE is the analysis and data processing framework of QGIS, and it can be used to run a large number of different analysis algorithms for both raster and vector data. Both WCS and WFS connections can be used to get data that can be processed through SEXTANTE.
The SEXTANTE toolbox is the main component used to call processing algorithms; it contains a list of all available ones, organized by providers.

The QGIS algorithms group contains native Python algorithms. Most of the remaining ones are algorithms that depend on a third-party application. From the point of view of the user, however, there is no difference in the way these algorithms are executed, since they share a common UI and SEXTANTE takes care of the communication between the application and QGIS.
To run an algorithm, just click on its name in the toolbox, and a dialog will appear to define the inputs and outputs of the selected algorithm.

All dialogs used for entering the parameters needed by an algorithm are created on-the-fly, and they all share the same look and feel, no matter which application the algorithm relies on.
If the selected algorithm requires vector layers, all loaded vector layers will be available, including those from WFS connections. If it requires raster layers, all loaded raster layers will be available, including those from WCS connections.
Click on Ok and the algorithm will be run and its outputs loaded in the QGIS canvas.
Algorithms depending on a third-party application can be called in the same way. WFS and WMS layers will be available as well, even if the application does not support web services. SEXTANTE will take care of creating an intermediate file or selecting a compatible way of feeding the data into the application, performing all necessary import operations under the hood.
For instance, you can run an algorithm that uses the R statistical computing software to perform a statistical analysis on data loaded from a WFS connection.

The interface to define the input parameters is very similar to the one we saw in the previous example (and to those of all the other algorithms), and using an external application makes no difference from the point of view of the user.

SEXTANTE algorithms are aware of the state of layers used for input. That means that they can take into account whether there is a selection in a vector layer, even in the case of calling an external application. In the SEXTANTE configuration menu, in the General group, check the “Use selected features” option to enable this behavior.
Now you can make a selection and call the R algorithm used before. It will just use those features that you selected.
Notice that SEXTANTE provides the context for seamlessly integrating all the pieces, allowing you to work with all of them together easily. In this case, GeoServer provides the data, QGIS provides the interface where we perform the selection, and R provides the processing. SEXTANTE is just the mediator between these elements.
Using SEXTANTE we can interact with a GeoServer instance not just to consume its data, but also to import data into it, giving a bidirectional connection.
In the GeoServer/PostGIS tools group, you will find some tools that you can use to create a new workspace, import a layer or import a style, among others.

Styles are imported using an SLD file. Since QGIS supports exporting the style of a layer to SLD, you can create your styling using QGIS, export it, and then use the corresponding SEXTANTE algorithm to add it to your GeoServer. To create and export a style, just right-click on a layer name in the TOC, select Properties, and then move to the Style tab. Click on “Save Style…” and then select “SLD style”.

Calling algorithms from the toolbox is the most common way of processing data with SEXTANTE, but in some cases that might not be the best alternative, especially if you have a workflow that involves a large number of different processes. SEXTANTE includes a graphical modeler that allows you to create complex workflows that can later be executed as a single process.
The modeler interface is started from the SEXTANTE menu, using the “SEXTANTE modeler” menu.

Creating a model involves adding a set of inputs and then the algorithms to use on them, defining the links between them.
As an example, the model in the figure below takes a points layer, interpolates a raster layer from it and then extracts contour lines for that resulting raster layer, given an interval. The final layer is imported into a GeoServer instance. Used with a WFS service, this can be used to easily create an additional lines layer for a given points layer in a GeoServer instance, such as one containing temperature data for a set of sensors.

The model can be added to the toolbox and run as any of the built-in algorithms.

Its parameters dialog is also created automatically from the inputs it takes and the outputs it produces.

SEXTANTE exposes its API through the QGIS python console, including not just all its available algorithms, but also several tools for easily handling both vector and raster data and for creating new geoalgorithms. Users familiar with Python will be able to create much more complex models with ease, and to automate tasks that involve processing.
Algorithms are executed using the runalg
method, passing the name of the algorithm and the parameters it requires. Importing a layer into GeoServer can be done from the QGIS console using code like the one shown below:
import sextante
filename = "/home/gisdata/example.shp"
sextante.runalg("gspg:importvectorintogeoserver","http://localhost:8080/geoserver/rest","admin","geoserver",filename,"workspace_name")
Algorithms in the gspg
namespace include utilities for interacting with both GeoServer and PostGIS.
If the layer is loaded into QGIS, there is no need to enter the filename. The layer object can be obtained with the getobject()
method and then passed to the algorithm call instead of the filename. For a layer named mylayer, that would be:
import sextante
layer = sextante.getobject("mylayer")
sextante.runalg("gspg:importvectorintogeoserver","http://localhost:8080/geoserver/rest","admin","geoserver",layer,"workspace_name")
Finally, one more productivity tool is available in SEXTANTE: the batch processing interface. Repeated calls to a single algorithm are simplified by using the batch processing interface. This can be used, for instance, to perform a bulk import of layers into GeoServer, by setting the batch processing interface to call the Import into GeoServer algorithm as many times as layers are to be imported.
To open the batch processing interface, right-click on the name of an algorithm in the toolbox and select Run as batch process.

Each row in the table (3 by default) represents a single execution; more rows can be added manually, or will be added automatically when selecting a set of input layers, each of them to be used as input in a different execution of the algorithm.

Models can also be run as a batch process. The model defined above, which computed a set of contour lines from a points layer and imported the result into GeoServer, can be called repeatedly using the same input layer and different intervals, to get contour layers of different level of detail, suitable to be rendered at different scales.
More detailed documentation about SEXTANTE can be found at a dedicated chapter in the current QGIS manual, at http://docs.qgis.org/html/en/docs/user_manual/sextante/index.html. For the most up-to-date version, check the corresponding entry at the QGIS documentation github repository, at https://github.com/qgis/QGIS-Documentation/tree/master/source/docs/user_manual/sextante
- Adding Data to GeoServer
- Learn how to add data to GeoServer.
- Pretty maps with GeoServer
- Learn how to create pretty styles for the Maps in GeoServer.
- Advanced Raster Data Management
- Learn advanced techniques for the delivery of Raster Data with GeoServer.
- Advanced Vectorial Data Management
- Learn advanced techniques for the delivery of Vectorial Data with GeoServer.
- Spatial Processing with GeoNode
- Learn how to do Spatial Processing using external tools.
GeoNode Advanced Configuration¶
Here you will find information about each and every component of GeoNode, for example GeoServer, GeoNode settings, security, etc.
Settings¶
Here’s a list of settings available in GeoNode and their default values. This includes settings for some external applications that GeoNode depends on.
Documents settings¶
Here’s a list of settings available for the Documents app in GeoNode.
ALLOWED_DOCUMENT_TYPES¶
Default: ['doc', 'docx', 'xls', 'xlsx', 'pdf', 'zip', 'jpg', 'jpeg', 'tif', 'tiff', 'png', 'gif', 'txt']
A list of acceptable file extensions that can be uploaded to the Documents app.
MAX_DOCUMENT_SIZE¶
Default: 2
The maximum size (in megabytes) that an upload will be before it gets rejected.
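Taken together, these two settings control which document uploads are accepted. The snippet below is an illustrative sketch of that check; the is_valid_document helper is hypothetical, not GeoNode's actual upload code.

```python
import os

# Values copied from the defaults above.
ALLOWED_DOCUMENT_TYPES = ['doc', 'docx', 'xls', 'xlsx', 'pdf', 'zip',
                          'jpg', 'jpeg', 'tif', 'tiff', 'png', 'gif', 'txt']
MAX_DOCUMENT_SIZE = 2  # megabytes

def is_valid_document(filename, size_bytes):
    """Hypothetical check: extension must be allowed, size under the cap."""
    ext = os.path.splitext(filename)[1].lstrip('.').lower()
    return (ext in ALLOWED_DOCUMENT_TYPES
            and size_bytes <= MAX_DOCUMENT_SIZE * 1024 * 1024)
```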
Metadata settings¶
CATALOGUE¶
A dict with the following keys:
- ENGINE: The CSW backend (default is
geonode.catalogue.backends.pycsw_local
) - URL: The FULLY QUALIFIED base url to the CSW instance for this GeoNode
- USERNAME: login credentials (if required)
- PASSWORD: login credentials (if required)
pycsw is the default CSW enabled in GeoNode. pycsw configuration directives are managed in the PYCSW entry.
PYCSW¶
A dict with pycsw’s configuration. Of note are the sections
metadata:main
to set CSW server metadata and
metadata:inspire
to set INSPIRE options. Setting
metadata:inspire['enabled']
to
true
will enable INSPIRE support. Server level configurations can be overridden in the
server
section. See http://docs.pycsw.org/en/latest/configuration.html for full pycsw configuration details.
DEFAULT_TOPICCATEGORY¶
Default: 'location'
The identifier of the default topic category to use when uploading new layers. The value specified for this setting must be
present in the TopicCategory table or GeoNode will return a TopicCategory.DoesNotExist
exception.
MODIFY_TOPICCATEGORY¶
Default: False
Metadata Topic Categories list should not be modified, as it is strictly defined by ISO (See: http://www.isotc211.org/2005/resources/Codelist/gmxCodelists.xml and check the <CodeListDictionary gml:id=”MD_MD_TopicCategoryCode”> element).
Some customisation is still possible by changing the is_choice and the GeoNode description fields.
In case it is absolutely necessary to add/delete/update categories, it is possible to set the MODIFY_TOPICCATEGORY setting to True.
Maps settings¶
DEFAULT_MAP_BASE_LAYER¶
The name of the background layer to include in newly created maps.
DEFAULT_MAP_CENTER¶
Default: (0, 0)
A 2-tuple with the latitude/longitude coordinates of the center-point to use in newly created maps.
DEFAULT_MAP_ZOOM¶
Default: 0
The zoom-level to use in newly created maps. This works like the OpenLayers zoom level setting; 0 is at the world extent and each additional level cuts the viewport in half in each direction.
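As a rough illustration of that halving behaviour, the width of the visible extent at a given zoom level can be computed like this (simplified: actual map widths depend on the projection and viewport size):

```python
def visible_width_degrees(zoom, world_width=360.0):
    """Zoom 0 shows the world extent; each level halves the viewport
    in each direction, so the visible width is divided by 2**zoom."""
    return world_width / (2 ** zoom)

# zoom 0 -> 360.0 degrees, zoom 1 -> 180.0, zoom 3 -> 45.0
```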
MAP_BASELAYERS¶
Default:
MAP_BASELAYERS = [{
"source": {
"ptype": "gxp_wmscsource",
"url": OGC_SERVER['default']['PUBLIC_LOCATION'] + "wms",
"restUrl": "/gs/rest"
}
},{
"source": {"ptype": "gxp_olsource"},
"type":"OpenLayers.Layer",
"args":["No background"],
"visibility": False,
"fixed": True,
"group":"background"
}, {
"source": {"ptype": "gxp_osmsource"},
"type":"OpenLayers.Layer.OSM",
"name":"mapnik",
"visibility": False,
"fixed": True,
"group":"background"
}, {
"source": {"ptype": "gxp_mapquestsource"},
"name":"osm",
"group":"background",
"visibility": True
}, {
"source": {"ptype": "gxp_mapquestsource"},
"name":"naip",
"group":"background",
"visibility": False
}, {
"source": {"ptype": "gxp_bingsource"},
"name": "AerialWithLabels",
"fixed": True,
"visibility": False,
"group":"background"
},{
"source": {"ptype": "gxp_mapboxsource"},
}, {
"source": {"ptype": "gxp_olsource"},
"type":"OpenLayers.Layer.WMS",
"group":"background",
"visibility": False,
"fixed": True,
"args":[
"bluemarble",
"http://maps.opengeo.org/geowebcache/service/wms",
{
"layers":["bluemarble"],
"format":"image/png",
"tiled": True,
"tilesOrigin": [-20037508.34, -20037508.34]
},
{"buffer": 0}
]
}]
A list of dictionaries that specify the default map layers.
LAYER_PREVIEW_LIBRARY¶
Default: "leaflet"
The library to use for displaying preview images of layers. The library choices are:
"leaflet"
"geoext"
OGC_SERVER¶
Default: {}
(Empty dictionary)
A dictionary of OGC servers and their options. The main
server should be listed in the ‘default’ key. If there is no ‘default’
key or if the OGC_SERVER
setting does not exist, GeoNode will raise
an Improperly Configured exception. Below is an example of the OGC_SERVER
setting:
OGC_SERVER = {
'default' : {
'LOCATION' : 'http://localhost:8080/geoserver/',
'USER' : 'admin',
'PASSWORD' : 'geoserver',
}
}
Default: "geonode.geoserver"
The OGC server backend to use. The backend choices are:
'geonode.geoserver'
Default: True
Specifies whether the OGC server can be written to. If False, actions that modify data on the OGC server will not execute.
Default: ''
(Empty string)
An optional string that represents the name of a vector datastore that GeoNode uploads
are imported into. In order to support vector datastore imports there also needs to be an
entry for the datastore in the DATABASES
dictionary with the same name. Example:
OGC_SERVER = {
'default' : {
'LOCATION' : 'http://localhost:8080/geoserver/',
'USER' : 'admin',
'PASSWORD' : 'geoserver',
'DATASTORE': 'geonode_imports'
}
}
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': 'development.db',
},
'geonode_imports' : {
'ENGINE': 'django.contrib.gis.db.backends.postgis',
'NAME': 'geonode_imports',
'USER' : 'geonode_user',
'PASSWORD' : 'a_password',
'HOST' : 'localhost',
'PORT' : '5432',
}
}
Default: False
A boolean that represents whether the OGC server supports GeoGig datastores.
Default: True
A boolean that represents whether GeoNode’s security application is enabled.
Default: "http://localhost:8080/geoserver/"
A base URL from which GeoNode can construct OGC service URLs. If using GeoServer you can determine this by visiting the GeoServer administration home page without the /web/ at the end. For example, if your GeoServer administration app is at http://example.com/geoserver/web/, your server’s location is http://example.com/geoserver.
Default: True
A boolean that represents whether the Mapfish printing extension is enabled on the server.
Default: True
A boolean that represents whether printing of maps and layers is enabled.
Default: "http://localhost:8080/geoserver/"
The URL used in most public requests from GeoNode. This setting allows a user to write to one OGC server (the LOCATION setting) and read from a separate server or the PUBLIC_LOCATION.
SITEURL¶
Default: 'http://localhost:8000/'
A base URL for use in creating absolute links to Django views and generating links in metadata.
Proxy settings¶
PROXY_ALLOWED_HOSTS¶
Default: ()
(Empty tuple)
A tuple of strings representing the host/domain names that GeoNode can proxy requests to. This is a security measure to prevent an attacker from using the GeoNode proxy to render malicious code or access internal sites.
Values in this tuple can be fully qualified names (e.g. ‘www.geonode.org’), in which case they will be matched against
the request’s Host header exactly (case-insensitive, not including port). A value beginning with a period can be used
as a subdomain wildcard: .geonode.org
will match geonode.org, www.geonode.org, and any other subdomain of
geonode.org. A value of ‘*’ will match anything and is not recommended for production deployments.
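These matching rules can be sketched as follows. This is a simplified re-implementation for illustration only; the host_allowed helper is hypothetical, not GeoNode's actual proxy code.

```python
def host_allowed(host, allowed):
    """Match a request's Host header against PROXY_ALLOWED_HOSTS patterns."""
    host = host.lower().split(':')[0]  # comparison ignores the port
    for pattern in allowed:
        if pattern == '*':
            return True  # matches anything; not recommended in production
        if pattern.startswith('.'):
            # Leading dot: subdomain wildcard that also matches the bare domain.
            if host == pattern[1:] or host.endswith(pattern):
                return True
        elif host == pattern.lower():
            return True
    return False
```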
PROXY_URL¶
Default /proxy/?url=
The URL to a proxy that will be used when making client-side requests in GeoNode. By default, the internal GeoNode proxy is used, but administrators may favor using their own, less restrictive proxies.
Search settings¶
DEFAULT_SEARCH_SIZE¶
Default: 10
An integer that specifies the default search size when using geonode.search
for querying data.
Security settings¶
AUTH_EXEMPT_URLS¶
Default: ()
(Empty tuple)
A tuple of url patterns that the user can visit without being authenticated.
This setting has no effect if LOCKDOWN_GEONODE
is not True. For example,
AUTH_EXEMPT_URLS = ('/maps',)
will allow unauthenticated users to
browse maps.
LOCKDOWN_GEONODE¶
Default: False
By default, the GeoNode application allows visitors to view most pages without being authenticated. If this is set to True,
users must be authenticated before accessing URL routes not included in AUTH_EXEMPT_URLS.
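For example, a locked-down site that still lets anonymous visitors browse maps might combine the two settings like this (the exempt paths are illustrative; pick the routes your deployment needs to keep public):

```python
# settings.py -- example lockdown configuration; exempt paths are illustrative
LOCKDOWN_GEONODE = True
AUTH_EXEMPT_URLS = (
    '/maps',           # allow unauthenticated users to browse maps
    '/account/login',  # keep the login page itself reachable
)
```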
RESOURCE_PUBLISHING¶
Default: True
By default, the GeoNode application allows GeoNode staff members to publish/unpublish resources. By default resources are published when created. When this setting is set to True, staff members will be able to unpublish a resource (and eventually publish it back).
Upload settings¶
GEOGIG_DATASTORE_NAME¶
Default: None
A string with the default GeoGig datastore name. This value is only used if no GeoGig datastore name is provided when data is uploaded but it must be populated if your deployment supports GeoGig.
UPLOADER¶
Default:
{
'BACKEND' : 'geonode.rest',
'OPTIONS' : {
'TIME_ENABLED': False,
'GEOGIG_ENABLED': False,
}
}
A dictionary of Uploader settings and their values.
Default: 'geonode.rest'
The uploader backend to use. The backend choices are:
'geonode.importer'
'geonode.rest'
The importer backend requires the GeoServer importer extension to be enabled and is required for uploading data into GeoGig datastores.
OPTIONS¶
Default:
'OPTIONS' : {
'TIME_ENABLED': False,
'GEOGIG_ENABLED': False,
}
Default: False
A boolean that specifies whether the upload should allow the user to enable time support when uploading data.
Default: False
A boolean that specifies whether the uploader should allow the user to upload data into a GeoGig datastore.
User Account settings¶
REGISTRATION_OPEN¶
Default: False
A boolean that specifies whether users can self-register for an account on your site.
THEME_ACCOUNT_CONTACT_EMAIL¶
Default: 'admin@example.com'
This email address is added to the bottom of the password reset page in case users have trouble un-locking their account.
Download settings¶
DOWNLOAD_FORMATS_METADATA¶
Specifies which metadata formats are available for users to download.
Default:
DOWNLOAD_FORMATS_METADATA = [
'Atom', 'DIF', 'Dublin Core', 'ebRIM', 'FGDC', 'ISO',
]
DOWNLOAD_FORMATS_VECTOR¶
Specifies which formats for vector data are available for users to download.
Default:
DOWNLOAD_FORMATS_VECTOR = [
'JPEG', 'PDF', 'PNG', 'Zipped Shapefile', 'GML 2.0', 'GML 3.1.1', 'CSV',
'Excel', 'GeoJSON', 'KML', 'View in Google Earth', 'Tiles',
]
DOWNLOAD_FORMATS_RASTER¶
Specifies which formats for raster data are available for users to download.
Default:
DOWNLOAD_FORMATS_RASTER = [
'JPEG', 'PDF', 'PNG', 'Tiles',
]
Contrib settings¶
EXIF_ENABLED¶
Default: False
A boolean that specifies whether the Exif contrib app is enabled. If enabled, metadata is generated from Exif tags when documents are uploaded.
NLP_ENABLED¶
Default: False
A boolean that specifies whether the NLP (Natural Language Processing) contrib app is enabled. If enabled, NLP (specifically MITIE) is used to infer additional metadata from uploaded documents to help fill metadata gaps.
NLP_LIBRARY_PATH¶
Default: '/opt/MITIE/mitielib'
A string that specifies the location of the MITIE library.
NLP_MODEL_PATH¶
Default: '/opt/MITIE/MITIE-models/english/ner_model.dat'
A string that specifies the location of the NER (Named Entity Resolver). MITIE comes with English and Spanish NER models. Other models can be trained.
SLACK_ENABLED¶
Default: False
A boolean that specifies whether the Slack contrib app is enabled. If enabled, GeoNode will send messages to the Slack channels specified in SLACK_WEBHOOK_URLS when a document is uploaded, metadata is updated, etc. Coverage of events is still incomplete.
SLACK_WEBHOOK_URLS¶
A list that specifies the URLs to post Slack messages to. Each URL is for a different channel. The default URL should be replaced when Slack integration is enabled.
Default:
SLACK_WEBHOOK_URLS = [
"https://hooks.slack.com/services/T000/B000/XX"
]
GeoNode Django Apps¶
The user interface of a GeoNode site is built on top of the Django web framework. GeoNode includes a few “apps” (reusable Django modules) to support development of those user interfaces. While these apps have reasonable default configurations, for customized GeoNode sites you will probably want to adjust these apps for your specific needs.
geonode.base
- GeoNode core functionality¶
Stores core functionality used throughout the GeoNode application.
Template Tags¶
num_ratings <object>
Returns the number of ratings an object has. Example usage:
{% load base_tags %} {% num_ratings map as num_votes %} <p>Map votes: {{num_votes}}.</p>
categories
Returns topic categories and the count of objects in each category.
geonode.documents
- Document creation and management¶
Manages uploaded files that can be related to maps. Documents can be any type of file that is included in the ALLOWED_DOCUMENT_TYPES setting.
settings.py
Entries¶
- ALLOWED_DOCUMENT_TYPES
Default:
['doc', 'docx', 'xls', 'xlsx', 'pdf', 'zip', 'jpg', 'jpeg', 'tif', 'tiff', 'png', 'gif', 'txt']
A list of acceptable file extensions that can be uploaded to the Documents app.
- MAX_DOCUMENT_SIZE
Default:
2
The maximum size (in megabytes) that an upload will be before it gets rejected.
geonode.layers
- Layer creation and geospatial data management¶
This Django app provides support for managing and manipulating single geospatial datasets known as layers.
Models¶
- Attribute - Feature attributes for a layer managed by the GeoNode.
- Layer - A data layer managed by the GeoNode
- Style - A data layer’s style managed by the GeoNode
Views¶
- Creating, viewing, browsing, editing, and deleting layers and their metadata
Template Tags¶
- featured_layers
- Returns the 7 newest layers.
- layer_thumbnail <layer>
- Returns the layer’s thumbnail.
manage.py
Commands¶
- importlayers
python manage.py importlayers
Brings a data file or a directory full of data files into a GeoNode site. Layers are added to the Django database, the GeoServer configuration, and the GeoNetwork metadata index.
- updatelayers
python manage.py updatelayers
Scans GeoServer for data that has not been added to GeoNode.
geonode.maps
- Map creation and geospatial data management¶
This Django app provides some support for managing and manipulating geospatial datasets. In particular, it provides tools for editing, viewing, and searching metadata for data layers, and for editing, viewing, and searching maps that aggregate data layers to display data about a particular topic.
Models¶
- Map - A collection of data layers composed in a particular order to form a map
- MapLayer - A model that maintains some map-specific information related to a layer, such as the z-indexing order.
Views¶
The maps app provides views for:
- Creating, viewing, browsing, editing, and deleting Maps
These operations require the use of GeoServer to manage map rendering, as well as GeoExt to provide interactive editing and previewing of maps and data layers.
There are also some url mappings in the geonode.maps.urls
module for easy
inclusion in GeoNode sites.
settings.py
Entries¶
- OGC_SERVER
Default:
{}
(Empty dictionary) A dictionary of OGC servers and their options. The main server should be listed in the ‘default’ key. If there is no ‘default’ key or if the
OGC_SERVER
setting does not exist, GeoNode will raise an Improperly Configured exception. Below is an example of the
OGC_SERVER
setting:
OGC_SERVER = {
    'default' : {
        'LOCATION' : 'http://localhost:8080/geoserver/',
        'USER' : 'admin',
        'PASSWORD' : 'geoserver',
    }
}
- BACKEND
Default:
"geonode.geoserver"
The OGC server backend to use. The backend choices are:
'geonode.geoserver'
- BACKEND_WRITE_ENABLED
Default:
True
Specifies whether the OGC server can be written to. If False, actions that modify data on the OGC server will not execute.
- LOCATION
Default:
"http://localhost:8080/geoserver/"
A base URL from which GeoNode can construct OGC service URLs. If using GeoServer you can determine this by visiting the GeoServer administration home page without the /web/ at the end. For example, if your GeoServer administration app is at http://example.com/geoserver/web/, your server’s location is http://example.com/geoserver.
- PUBLIC_LOCATION
Default:
"http://localhost:8080/geoserver/"
The URL used in most public requests from GeoNode. This setting allows a user to write to one OGC server (the LOCATION setting) and read from a separate server or the PUBLIC_LOCATION.
- USER
Default:
'admin'
The administrative username for the OGC server as a string.
- PASSWORD
Default:
'geoserver'
The administrative password for the OGC server as a string.
- MAPFISH_PRINT_ENABLED
Default:
True
A boolean that represents whether the Mapfish printing extension is enabled on the server.
- PRINT_NG_ENABLED
Default:
True
A boolean that represents whether printing of maps and layers is enabled.
- GEONODE_SECURITY_ENABLED
Default:
True
A boolean that represents whether GeoNode’s security application is enabled.
- GEOGIT_ENABLED
Default:
False
A boolean that represents whether the OGC server supports Geogig datastores.
- WMST_ENABLED
Default:
False
Not implemented.
- WPS_ENABLED
Default:
False
Not implemented.
- DATASTORE
Default:
''
(Empty string) An optional string that represents the name of a vector datastore that GeoNode uploads are imported into. In order to support vector datastore imports there also needs to be an entry for the datastore in the
DATABASES
dictionary with the same name. Example:
OGC_SERVER = {
    'default' : {
        'LOCATION' : 'http://localhost:8080/geoserver/',
        'USER' : 'admin',
        'PASSWORD' : 'geoserver',
        'DATASTORE': 'geonode_imports'
    }
}
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.sqlite3',
        'NAME': 'development.db',
    },
    'geonode_imports' : {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': 'geonode_imports',
        'USER' : 'geonode_user',
        'PASSWORD' : 'a_password',
        'HOST' : 'localhost',
        'PORT' : '5432',
    }
}
- GEOSERVER_CREDENTIALS
- Removed in GeoNode 2.0, this value is now specified in the
OGC_SERVER
settings. - GEOSERVER_BASE_URL
- Removed in GeoNode 2.0, this value is now specified in the
OGC_SERVER
settings. - CATALOGUE
A dict with the following keys:
- ENGINE: The CSW backend (default is
geonode.catalogue.backends.pycsw_local
) - URL: The FULLY QUALIFIED base url to the CSW instance for this GeoNode
- USERNAME: login credentials (if required)
- PASSWORD: login credentials (if required)
pycsw is the default CSW enabled in GeoNode. pycsw configuration directives are managed in the PYCSW entry.
- ENGINE: The CSW backend (default is
- PYCSW
- A dict with pycsw’s configuration. Of note are the sections
metadata:main
to set CSW server metadata and
metadata:inspire
to set INSPIRE options. Setting
metadata:inspire['enabled']
to
true
will enable INSPIRE support. Server level configurations can be overridden in the
server
section. See http://pycsw.org/docs/configuration.html for full pycsw configuration details.
- SITEURL
Default:
'http://localhost:8000/'
A base URL for use in creating absolute links to Django views.
- DEFAULT_MAP_BASE_LAYER
- The name of the background layer to include in newly created maps.
- DEFAULT_MAP_CENTER
Default:
(0, 0)
A 2-tuple with the latitude/longitude coordinates of the center-point to use in newly created maps.
- DEFAULT_MAP_ZOOM
Default:
0
The zoom-level to use in newly created maps. This works like the OpenLayers zoom level setting; 0 is at the world extent and each additional level cuts the viewport in half in each direction.
geonode.proxy
- Assist JavaScript applications in accessing remote servers¶
This Django app provides some HTTP proxies for accessing data from remote servers, to overcome restrictions imposed by the same-origin policy used by browsers. This helps the GeoExt applications in a GeoNode site to access various XML documents from OGC-compliant data services.
Views¶
- geonode.proxy.views.proxy
- This view forwards requests without authentication to a URL provided in the request, similar to the proxy.cgi script provided by the OpenLayers project.
- geonode.proxy.views.geoserver
- This view proxies requests to GeoServer. Instead of a URL-encoded URL
parameter, the path component of the request is expected to be a path
component for GeoServer. Requests to this URL require valid authentication
against the Django site, and will use the default
OGC_SERVER
USER
,PASSWORD
andLOCATION
settings as defined in the maps application.
geonode.search
- Provides the GeoNode search functionality.¶
This Django app provides a fast search functionality to GeoNode.
Views¶
- search_api - Builds and executes a search query based on URL parameters and returns matching results in the requested format.
geonode.security
- GeoNode granular Auth Backend¶
This app provides an authentication backend for use in assigning permissions to individual objects (maps and layers).
settings.py
Entries¶
- LOCKDOWN_GEONODE
Default:
False
By default, the GeoNode application allows visitors to view most pages without being authenticated. Set
LOCKDOWN_GEONODE = True
to require a user to be authenticated before viewing the application.
- AUTH_EXEMPT_URLS
Default:
()
(Empty tuple) A tuple of url patterns that the user can visit without being authenticated. This setting has no effect if
LOCKDOWN_GEONODE
is not True. For example,AUTH_EXEMPT_URLS = ('/maps',)
will allow unauthenticated users to browse maps.
Template Tags¶
- geonode_media <media_name>
Accesses entries in MEDIA_LOCATIONS without requiring the view to explicitly add it to the template context. Example usage:
{% include geonode_media %} {% geonode_media "ext_base" %}
- has_obj_perm <user> <obj> <permission>
Checks whether the user has the specified permission on an object.
{% has_obj_perm user obj "app.view_thing" as can_view_thing %}
Django’s error templates¶
GeoNode customizes some of Django’s default error templates.
500.html¶
If no custom handler for 500 errors is set up in the URLconf module, Django will call django.views.defaults.server_error, which expects a 500.html file in the root of the templates directory. In GeoNode, we provide a 500.html template that does not inherit from anything; because most of Django’s machinery is down when an INTERNAL ERROR (500 code) is encountered, the use of template tags should be avoided.
Make a GeoNode release¶
Making a GeoNode release requires a fairly complex preparation of the environment, but once everything is set up it is a quick and easy task. As said, the complex part is the preparation of the environment, since it involves generating a GPG key, uploading it to the Ubuntu servers, and importing it in Launchpad.
If you have already prepared the environment then jump to the last paragraph.
Before starting, make sure you have a PyPi and a Launchpad account.
Before doing the release, a GeoNode team member who can already make releases has to add you to the GeoNode Launchpad team.
Creating and importing a gpg key¶
A GPG key is needed to push and publish the package. There is a complete guide on how to create and import a GPG key.
Preparing the environment¶
Make sure you have an Ubuntu 12.04 machine. Install the following packages in addition to the Python virtualenv tools:
$ sudo apt-get install git-core git-buildpackage debhelper devscripts
Get the GeoNode code (from master) in a virtualenv:
$ mkvirtualenv geonode
$ git clone https://github.com/GeoNode/geonode.git
$ cd geonode
Edit the .bashrc file and add the following lines (the key ID can be found in the “your personal keys” tab):
export GPG_KEY_GEONODE="your_gpg_key_id"
export DEBEMAIL=yourmail@mail.com
export EDITOR=vim
export DEBFULLNAME="Your Full Name"
then close the file and reload it:
$ source .bashrc
Run “env” to make sure all the variables are correctly exported.
Set the correct git email:
$ git config --global user.email "yourmail@mail.com"
Register on Pypi with your Pypi credentials:
$ python setup.py register
Make the release¶
The following are the only commands needed if the environment and the various registrations have already been done.
Make sure to have pulled master to the desired commit. Edit the file geonode/__init__.py at line 21 and set the correct version.
Install GeoNode in the virtualenv (make sure to have the virtualenv activated and be in the geonode folder):
$ pip install -e geonode
Publish the package:
$ cd geonode
$ paver publish
The last command will:
- Tag the release and push it to GitHub
- Create the debian package and push it at ppa:geonode/testing in launchpad
- Create the .tar.gz sources and push them to Pypi
- Update the changelog and commit it to master
GeoNode on Production¶
Configuring GeoNode for Production
Advanced GeoServer Configuration
GeoSites: GeoNode Multi-Tenancy
Configuring GeoNode for Production¶
This page documents recommendations for configuring GeoNode in production environments. The steps mentioned in the first section are required to run GeoNode, the ones in the second section are either optional or ways to get more performance.
Note
This document makes numerous references to the <host> variable; please replace it with the IP address or DNS entry of your GeoNode.
For example: instead of http://<host>/geoserver
, write down: http://mygeonode.com/geoserver
or http://127.0.0.1/geoserver
Set the correct GeoServer Proxy URL value¶
Navigate to http://localhost/geoserver
, log in and click on the Global
link in the Settings section.
Note
The GeoServer default username is admin and the default password is geoserver. Change these as soon as possible.

Find the Proxy Base URL
text field, put the complete address there:
http://<host>/geoserver/

Configure the Printing Module¶
This configuration lives in the GeoServer data directory at /usr/share/geoserver/data/printing/config.yaml
. Add your server’s IP address or domain name to the list of exceptions.
Please refer to http://docs.geoserver.org/2.4.x/en/user/datadirectory/index.html for additional information on managing the geoserver data directory:
hosts:
- !dnsMatch
host: YOUR_IP_ADDRESS
port: 80
Recommended Steps (optional)¶
Adding layers from Google, Bing and other providers¶
Bing¶
Get an API key from Microsoft at http://bingmapsportal.com/ and place it in local_settings.py:
BING_API_KEY="zxcxzcXAWdsdfkjsdfWWsdfjpowxcxz"
Copy the MAP_BASELAYERS
dictionary from settings.py
into local_settings.py
and add the following snippet:
},{
"source": {
"ptype":"gxp_bingsource",
"apiKey": BING_API_KEY
},
"group":"background",
"name":"Aerial",
"visibility": False,
"fixed": True,
Google¶
Get an API key from Google at http://code.google.com/apis/maps/signup.html and place it in local_settings.py
, for example:
GOOGLE_API_KEY="zxcxzcXAWdqwdQWWQEDzxcxz"
Copy the MAP_BASELAYERS
dictionary from settings.py
into local_settings.py
(or edit the previously copied snippet) and add the following snippet:
},{
"source": {
"ptype":"gxp_googlesource",
"apiKey": GOOGLE_API_KEY
},
"group":"background",
"name":"SATELLITE",
"visibility": False,
"fixed": True,
Sitemaps Configuration¶
GeoNode can automatically generate a sitemap suitable for submission to search engines which can help them to index your GeoNode site more efficiently and effectively.
In order to generate the sitemap properly, the site’s domain name must be set
within the sites framework. This requires that a superuser log in to the
admin interface, navigate to the sites module, and change example.com to the
actual domain name (and port if applicable). The admin interface can be accessed
at http://<host>/admin/sites/site/. Click on the example.com
link, and change
both the Domain name
and Display name
entries to match your system.
It is possible to ‘inform’ Google of changes to your sitemap. This is accomplished using the ping_google management command; more information can be found at http://docs.djangoproject.com/en/dev/ref/contrib/sitemaps/#django.contrib.sitemaps.ping_google. It is recommended to put this call into a cron (scheduled) job to update Google periodically.
Configuring User Registration¶
You can optionally configure GeoNode to allow new users to register through the web. New registrants will be sent an email inviting them to activate their account.
To allow new user registration:
Set up the email backend for Django (see Django documentation) and add the appropriate settings to
./src/GeoNodePy/geonode/local_settings.py
. For example:
EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'foo@gmail.com'
EMAIL_HOST_PASSWORD = 'bar'
EMAIL_PORT = 587
EMAIL_USE_TLS = True
In the same settings file set:
REGISTRATION_OPEN=True
With the Django application running, set the domain name of the service through the admin interface as specified above in the Sitemaps section (this domain name is used in the account activation emails).
Restart apache:
$ sudo service apache2 restart
(Optional) Disable automatic approval of new users. Administrators will then receive an email and need to manually approve new accounts. For this option to work, an email backend has to be defined so that users with Staff status are emailed a notification to approve each new account:
ACCOUNT_APPROVAL_REQUIRED = True
To register as a new user, click the ‘’Register’’ link in the GeoNode index header.
Additional Configuration¶
Some other things that require tweaking:
- Web-accessible uploads directory for user profile photos
Robot Exclusion File¶
GeoNode has several views that require considerable resources to properly respond - for example, the download links on layer detail pages require GeoServer to dynamically generate output in PDF, PNG, etc. format.
Crawlers for web search engines such as Google may cause problems by quickly following many such links in succession.
In order to request that “robots” do not make requests directly to GeoServer, you can ensure that requests to /robots.txt
return a text file with the following content:
User-agent: *
Disallow: /geoserver/
This will only affect automated web agents; web browsers in particular are unaffected.
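As a sketch of what such a response amounts to, the robots.txt body can be generated programmatically (the helper function below is illustrative, not part of GeoNode):

```python
# Illustrative helper: build a robots.txt body that asks all crawlers
# to stay out of the listed paths (here, the GeoServer endpoints).
def build_robots_txt(disallowed=("/geoserver/",)):
    lines = ["User-agent: *"]
    lines.extend("Disallow: %s" % path for path in disallowed)
    return "\n".join(lines) + "\n"

print(build_robots_txt())
```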
Memory Management¶
At the time of the GeoNode 1.0 release, the GeoNode manual recommended at least 4GB RAM for servers running GeoNode.
While 4GB physical RAM is sufficient, it is recommended that machines be configured with at least 6GB total virtual memory.
For example, a machine with 4GB physical RAM and 2GB swap space should be able to run GeoNode, but if you would like to run without a swapfile then you should configure the machine with at least 6GB RAM.
On Linux and MacOSX hosts, you can check the available RAM with the following command:
$ free -m
total used free shared buffers cached
Mem: 6096 3863 2232 0 0 0
-/+ buffers/cache: 3863 2232
Swap: 0 0 0
The “total” column lists the available physical memory and swap space in megabytes; adding them together yields the amount of virtual memory available to the system.
In this example, there is no Swap space so that field is 0 and the available RAM on the system is 6096MB (6 GB).
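The arithmetic is simply the sum of the two "total" figures; a trivial sketch using the numbers from the example above:

```python
# Total virtual memory is physical RAM plus swap, both in megabytes,
# matching the "total" column of free -m.
def total_virtual_memory_mb(ram_mb, swap_mb):
    return ram_mb + swap_mb

# Values from the free -m output above: 6096 MB RAM, 0 MB swap.
print(total_virtual_memory_mb(6096, 0))        # → 6096
# A 4GB-RAM machine needs about 2GB of swap to reach the 6GB recommendation.
print(total_virtual_memory_mb(4096, 2048))     # → 6144
```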
Security Integration Optimization¶
GeoServer delegates authentication and authorization to GeoNode. The default configuration uses an HTTP endpoint in GeoNode to discover the current user and the layers that are accessible. For production, it is advisable to use a database-level connection.
Installing the Stored Procedure¶
The SQL for the stored procedure is distributed with the GeoServer web application archive and can be found at WEB-INF/classes/org/geonode/security/geonode_authorize_layer.sql
in the webapps directory. It can be loaded using the psql command by following these steps (if not using tomcat6 on Ubuntu, locate the webapps directory for your configuration):
$ cd /var/lib/tomcat6/webapps
$ sudo su - postgres
$ psql -d YOUR_DATABASE < geoserver/WEB-INF/classes/org/geonode/security/geonode_authorize_layer.sql
Configuring GeoServer to Use the Database Security Client¶
If a context configuration XML file does not already exist, create one for GeoServer. If using Tomcat6 on Ubuntu, this file resides at /etc/tomcat6/Catalina/localhost/geoserver.xml
. If creating a new file, the following XML should be added (replace ALLCAPS with your specific values):
<Context path="/geoserver"
antiResourceLocking="false" >
<Parameter name="org.geonode.security.databaseSecurityClient.url"
value="jdbc:postgresql://localhost:5432/DATABASE?user=USER&amp;password=PASSWORD"/>
</Context>
If the file exists already, just add the Parameter element from above.
Verification of Database Security Client¶
To verify the settings change, look in the GeoServer logs for a line that notes: “using geonode database security client”. If any issues arise, check your connection configuration as specified in the context file above.
Configuring the Servlet Container¶
GeoServer is the most resource intensive component of GeoNode.
There are some general notes on setting up GeoServer for production environments in the GeoServer manual .
However, the following are some GeoServer recommendations with GeoNode’s specific needs in mind.
JVM Options¶
The JRE used with GeoNode should be the one distributed by Oracle.
Others, such as OpenJDK, may work, but Oracle’s JRE is recommended for higher performance rendering.
Startup options should include the following:
-Xmx1024M -Xms1024M -XX:MaxPermSize=256M \
-XX:CompileCommand=exclude,net/sf/saxon/event/ReceivingContentHandler.startEvent
These can be specified using the CATALINA_OPTS
variable in Tomcat’s bin/catalina.sh
file, or the JETTY_OPTS
in Jetty’s bin/jetty.sh
file.
Constrain GeoServer Worker Threads¶
While the JVM provides memory management for most operations in Java applications, the memory used for rendering (in GeoServer’s case, responding to WMS GetMap requests) is not managed this way, so it is allocated in addition to the memory permitted by the JVM options above.
If GeoServer receives many concurrent requests, memory usage may increase significantly, so it is recommended to constrain the number of concurrent requests at the servlet container level (i.e., Jetty or Tomcat).
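To get a feel for why this matters, here is a back-of-the-envelope estimate of worst-case rendering memory (an illustrative sketch; actual GeoServer allocation varies with output format and style):

```python
# Rough worst case: each concurrent GetMap may allocate an RGBA rendering
# surface of width * height * 4 bytes, outside the heap cap set by -Xmx.
def worst_case_rendering_mb(width, height, concurrent, bytes_per_pixel=4):
    return width * height * bytes_per_pixel * concurrent / (1024.0 * 1024.0)

# 50 concurrent 1024x768 requests: 3 MB each, 150 MB in total.
print(round(worst_case_rendering_mb(1024, 768, 50)))  # → 150
```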
For Tomcat, you can edit conf/server.xml
. By default, this file contains an entry defining a Connector:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443"/>
Add a maxThreads
attribute to limit the number of threads (concurrent requests) to 50 (the default in Tomcat is 200):
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
redirectPort="8443" maxThreads="50"/>
Note
This configuration is possible in Jetty as well but not yet documented in this manual.
Native JAI and JAI ImageIO¶
Using the native-code implementation of JAI and JAI ImageIO speeds up GeoServer, thereby requiring less concurrency at the same level of throughput.
The GeoServer manual contains platform-specific instructions for configuring JAI and JAI ImageIO.
GeoServer Configuration¶
There are a few controls to be set in the GeoServer configuration itself as well.
On the JAI Settings page¶

There are two considerations for the JAI settings.
- Enable JPEG and PNG Native Acceleration to speed up the performance of WMS requests
- Disable Tile Recycling as this optimization is less relevant on recent JVM implementations and has some overhead itself.
On the WMS Service page¶

There is only one consideration for the Web Map Service page
- Don’t set the “Resource Consumption Limits.” This sounds a bit counterintuitive, but these limits are implemented in an inefficient way, such that unless resource-intensive requests are common on your server it is more efficient to avoid the limits. A better implementation of this feature is available for GeoServer 2.1 and will be incorporated in GeoNode 1.1.
Printing with the Mapfish Print Service¶
The GeoNode map composer can “print” maps to PDF documents using the Mapfish print service. The recommended way to run this service is by using the printing extension to GeoServer (if you are using the pre-built GeoNode package, this extension is already installed for you). However, the print service includes restrictions on the servers that can provide map tiles for printed maps. These restrictions have a fairly strict default, so you may want to loosen these constraints.
Adding servers by hostname¶
The Mapfish printing module is configured through a YAML
configuration file, usually named print.yaml
. If you are using the
GeoServer plugin to run the printing module, this configuration file can be
found at GEOSERVER_DATA_DIR/printing/config.yaml
. The default
configuration should contain an entry like so:
hosts:
- !dnsMatch
host: labs.metacarta.com
port: 80
- !dnsMatch
host: terraservice.net
port: 80
You can add host/port entries to this list to allow other servers.
See also
The Mapfish documentation on configuring the print module.
The GeoServer documentation on configuring the print module.
Advanced GeoServer Configuration¶
In this module we are going to cover some advanced configurations:
Configuring GeoServer for robustness¶
In a production environment it may be necessary to properly configure the WMS service in order to limit the resources associated with each request. The Resource Limits options allow the administrator to limit the resources consumed by each WMS GetMap request.
GeoServer provides a user interface for these options:

Setting the Resource consumption limits
The following table shows each option name, a description, and the minimum GeoServer version at which the option is available (old versions will just ignore it if set).
Option | Description | Version |
Max rendering memory | Sets the maximum amount of memory, in kilobytes, a single GetMap request is allowed to use. Each output format will make a best effort attempt to respect the maximum using the highest consuming portion of the request processing as a reference. For example, the PNG output format will take into consideration the memory used to prepare the image rendering surface in memory, usually proportional to the image size multiplied by the number of bytes per pixel | 1.7.5 |
Max rendering time | Sets the maximum amount of time, in seconds, GeoServer will use to process the request. This time limits the “blind processing” portion of the request serving, that is, the part in which GeoServer is computing the results before writing them out to the client. The portion that is writing results back to the client is not under the control of this parameter, since this time is also controlled by how fast the network between the server and the client is. So, for example, in the case of PNG/JPEG image generation, this option will control the pure rendering time, but not the time used to write the results back. | 1.7.5 |
Max rendering errors | Sets the maximum amount of rendering errors tolerated by a GetMap. Usually GetMap skips over faulty features, reprojection errors and the like in an attempt to serve the results anyways. This makes for a best effort rendering, but also makes it harder to spot issues, and consumes CPU cycles as each error is handled and logged | 1.7.5 |
Out of the box GeoServer uses 65MB, 60 seconds and 1000 errors respectively. All limits can be disabled by setting their value to 0
.
Once any of the set limits is exceeded, the GetMap operation will stop and a ServiceException
will be returned to the client.
It is suggested that the administrator set all of the above limits taking peak conditions into consideration. For example, while a GetMap request under normal circumstances may take less than a second, under high load it is acceptable for it to take longer; but it is usually not reasonable for a request to run for 30 minutes straight. The following table shows some example values for the configuration options above, with explanations of how each is computed:
Option | Value | Rationale |
maxRequestMemory | 65000 | 65MB is sufficient to render a 4078x4078 image at 4 bytes per pixel (full color and transparency), or an 8x8 meta-tile if you are using GeoWebCache or TileCache. Mind that the rendering process will use an extra in-memory buffer for each subsequent FeatureTypeStyle in your SLD, so this will also limit the size of the image. For example, if the SLD contains two FeatureTypeStyle elements in order to draw cased lines for a highway, the maximum image size will be limited to 2884x2884 (the memory grows with the square of the image size, so halving the memory does not halve the image size) |
maxRenderingTime | 60 | A request that processes for one minute straight is probably drawing a lot of features independent of the current load. It might be the result of a client making a GetMap against a big layer using a custom style that does not have the proper scale dependencies |
maxRenderingErrors | 1000 | Encountering 1000 errors is probably the result of a request that is trying to reproject a big data set into a projection that is not suited to the area it covers, resulting in many reprojection failures. |
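The 4078x4078 figure in the table above can be verified with a quick calculation (illustrative only):

```python
# Memory needed for an in-memory image rendering surface, in kilobytes.
def image_memory_kb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel // 1024

# A 4078x4078 full-color image with transparency fits within 65000 KB...
print(image_memory_kb(4078, 4078))           # → 64961
# ...and with two FeatureTypeStyles each needing their own buffer,
# 2884x2884 is about the largest size that still fits.
print(2 * image_memory_kb(2884, 2884))       # → 64980
```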
Advanced Production GeoServer configuration¶
Most of the GeoServer downloads are geared towards quickly showing off the capabilities, with an array of demos, sample layers, and an embedded servlet container. If you are using GeoServer in a production environment, there are a few things we’d like to recommend. In this section the task is to configure your system to use it in production.
Note
Before you start, ensure that the Web Administrator Interface - Server section has been completed.
Configuring your container for production¶
Note
Most open source Java web containers, such as Tomcat, ship with development mode configurations that allow for quick startup but don’t deliver the best performance.
Make sure that the ‘setenv.sh’ file (‘setenv.bat’ on Windows machines) contains the following configuration to set up the Java virtual machine options in your container. Open the ‘setenv.sh/.bat’ file located in the ‘<TRAINING_ROOT>’ directory and look at the options:
- -server: Not present among the training options, this option enables the server JVM, which JIT-compiles bytecode much earlier and with stronger optimizations. Startup and first calls will be slower due to JIT compilation taking more time, but subsequent ones will be faster (to give you some numbers, on the same machine a vanilla VM returns GML at 7MB/s while a -server one runs at 10MB/s). This option is required only if the JVM does not already get into server mode, which happens on a server operating system (Linux, Windows Server) with at least 2 cores and 2 GB of memory.
Note
This parameter is necessary only for Windows environments of class workstation
- -Xms512m -Xmx512M: give your server memory. By default the JVM will use only 64MB of heap. If you’re serving just vector data, you’ll be fully streaming, so having much memory won’t help a lot, but if you’re serving coverages JAI will use a cache to avoid hitting the disk often. In this case, give GeoServer at least 256MB of memory, or more if you have plenty of RAM, and configure the JAI tile cache size in the GeoServer configuration panel so that it uses 75% of the heap (0.75). If you have plenty of memory it is suggested to set -Xms to the same value as -Xmx; this will make heap management more stable during heavy load serving. Generally speaking, don’t allocate more than 2GB for the GeoServer heap.
- -XX:MaxPermSize=128m (or more): the permanent generation is the heap portion where the class bytecode is stored. GeoServer uses lots of classes, and it may exhaust that space quickly, leading to out of memory errors. If you’re deploying GeoServer along with other applications in the same container, or if you need to deploy multiple GeoServer instances inside the same container (e.g., different instances for different customers or similar needs), you should raise MaxPermSize to 128m or more.
Warning
In order to obtain the best performance, install the native JAI version in your JDK. In particular, installing the native JAI is important for all raster processing, which is used heavily in both WMS and WCS to rescale, cut and reproject rasters. Installing the native JAI is also important for all raster reading and writing, which affects both WMS and WCS. Finally, native JAI is very useful even if there is no raster data involved, as WMS output encoding requires writing PNG/GIF/JPEG images, which are themselves rasters. For more information on how to install JAI and ImageIO see the Installing the native JAI and ImageIO section.
Setting up logging for production¶
Note
Logging may visibly affect the performance of your server. High logging levels are often necessary to track down issues, but by default you should run with low ones (and you can switch the logging levels at runtime, so don’t worry about having to stop the server to gather more information). You can change the logging level by going to the GeoServer configuration panel, Server section.
- Go to http://localhost:8083/geoserver and click on the ‘Global’ link in the ‘Settings’ section.
- Select ‘PRODUCTION_LOGGING.properties’ in Logging Profile and click submit.

Set up logging for production
Choosing a service strategy¶
Note
A service strategy is the way we serve the output to the client. Basically, you have to choose between being absolutely sure of reporting errors with the proper OGC codes and serving output quickly.
You can configure the service strategy modifying the web.xml file located in ‘<TOMCAT_HOME>/instances/instance1/webapps/geoserver/WEB-INF’ directory of your GeoServer install:
- Set the ‘serviceStrategy’ param-name with ‘SPEED’.
All the possible strategies are:
- SPEED: serve outputs right away. The fastest strategy, though it makes it unlikely to be able to report proper OGC errors in WFS (they are reported only if the error occurs before the GML encoding starts).
- BUFFER: stores the whole result in memory, and then serves it after the output is complete. This ensures proper OGC error reporting, but delays the response quite a bit and will exhaust memory if the response is big.
- FILE: same as buffer, but uses a file storage for buffering. Slower than BUFFER, ensures there won’t be memory issues.
- PARTIAL-BUFFER2: a balance between the two; it tries to buffer in memory a few kilobytes of the response, then behaves like SPEED.
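The relevant fragment of web.xml might look like the following (a sketch; check your GeoServer version’s web.xml for the exact parameter name and default):

```xml
<context-param>
  <param-name>serviceStrategy</param-name>
  <!-- one of SPEED, BUFFER, FILE, PARTIAL-BUFFER2 -->
  <param-value>SPEED</param-value>
</context-param>
```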
Configuring all data and metadata to your instance¶
Note
It may be tempting to just skip some of the configuration steps and leave the sample keywords and abstract in place. Please do not, as this will only confuse potential users: they will see a list of GeoServers all called ‘My GeoServer’.
- Completely fill out the WFS and WMS Contents sections.
- Put in your own URI (such as the name of your website) for the Namespace (Data -> Workspace) and remove the defaults.
- Make sure your datastores all use your URI.
- Remove the sample layers (states, spearfish, New York roads and the like; the easiest way is to remove the demo workspaces, as everything contained in them will be removed as a result).
Change the administrator password¶
GeoServer ships by default with “admin” and “geoserver” as the default administrator user name and password. Before putting the GeoServer on-line it is imperative to change at least the administrator password.
Making use of an external Data Directory to store your configurations¶
Note
The configuration data resides within the GEOSERVER_DATA_DIR. To increase the portability of your data and to facilitate updates of GeoServer, you should place this directory outside of GeoServer by editing the web.xml file with the path that you prefer.
See the ‘GEOSERVER_DATA_DIR’ context param in ‘<TRAINING_ROOT>/tomcat-6.0.36/instances/instance1/webapps/geoserver/WEB-INF’:
<context-param>
<param-name>GEOSERVER_DATA_DIR</param-name>
<param-value>$GEOSERVER_DATA_DIR</param-value>
</context-param>
Note
The external data dir can also be configured through the environment variables in the ‘setenv.sh/.bat’ file.
Using a Spatial Database¶
We make shapefiles available as a datastore, as they are such a common format. But if you are running GeoServer in a production environment, and if you need to manage spatial indexes or transactions, or if you have specific requirements involving the use of a database, setting up a spatial database and converting your shapefiles is highly recommended. If you’re doing transactions against GeoServer this is essential. Even though we have a very nice transaction framework, doubling up with the native transaction support of relational databases ensures your data integrity. Almost all the major spatial databases provide support to easily turn shapefiles into their native format. We recommend PostGIS, the open source spatial extension to the PostgreSQL database; most of our testing has been performed against it. Oracle, DB2, SQL Server and ArcSDE are also well supported. At the moment we don’t recommend MySQL, as it has trouble with rollbacks on geometry tables and lacks advanced spatial functionality, but it is an option.
Setting security¶
GeoServer by default includes WFS-T, which lets users modify your backend database. If you don’t want that to happen, you can turn off transactions in the web admin tool: Service Panel -> WFS, and set Service Level to Basic. If you’d like some users to be able to modify it, but not all, you’ll have to set up data access level security. For extra security when operating in read-only mode, make sure that the connection to the datastore that is open to all uses a user with read-only permissions. That makes SQL injection completely impossible (though GeoServer is generally designed well enough that it’s not vulnerable).
Dealing with a locked down environment¶
GeoServer code, and the libraries it uses (GeoTools, JAI), are not designed to be run in a security locked down environment. They need free access to environment variables, the temp directory, user preferences and the like. In operating systems like Ubuntu the default Tomcat is locked down so that most of the above is not authorized. So far, the only way to run GeoServer in that environment is to grant all permissions to it.
Caching¶
Server-side caching of WMS tiles is the best way to get performance. Essentially how the caching works is the server will recognize a request and quickly return a pre-rendered result. This is how you can optimize for tile-based WMS clients and it works the best for them. There are several ways to set up caching for GeoServer like GeoWebCache.
Advanced Coordinate Reference System Handling¶
This section describes how coordinate reference systems (CRS) are handled in GeoServer, as well as what can be done to extend GeoServer’s CRS handling abilities.
Coordinate Reference System Configuration¶
When adding data, GeoServer tries to inspect data headers looking for an EPSG code: if the data has a CRS with an explicit EPSG code and the full CRS definition behind the code matches the one in GeoServer, the CRS will be already set for the data. If the data has a CRS but no EPSG code, you’ll have to manually guess the EPSG code. Browsing to www.spatialreference.org might be a good option to find the exact EPSG code for your data.
If an EPSG code cannot be found, then either the data has no CRS or it is unknown to GeoServer. In this case, there are a few options:
- Force the declared CRS, ignoring the native one. This is the best solution if the native CRS is known to be wrong.
- Reproject from the native to the declared CRS. This is the best solution if the native CRS is correct, but cannot be matched to an EPSG number. An alternative is to add a custom EPSG code that matches exactly the native SRS.
If your data has no native CRS information, the only option is to specify/force an EPSG code.
Custom CRS Definitions¶
This example shows how to add a custom projection in GeoServer.
The projection parameters need to be provided as a WKT (well known text) definition. The code sample below is just an example:
PROJCS["NAD83 / Austin", GEOGCS["NAD83", DATUM["North_American_Datum_1983", SPHEROID["GRS 1980", 6378137.0, 298.257222101], TOWGS84[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]], PRIMEM["Greenwich", 0.0], UNIT["degree", 0.017453292519943295], AXIS["Lon", EAST], AXIS["Lat", NORTH]], PROJECTION["Lambert_Conformal_Conic_2SP"], PARAMETER["central_meridian", -100.333333333333], PARAMETER["latitude_of_origin", 29.6666666666667], PARAMETER["standard_parallel_1", 31.883333333333297], PARAMETER["false_easting", 2296583.333333], PARAMETER["false_northing", 9842500.0], PARAMETER["standard_parallel_2", 30.1166666666667], UNIT["m", 1.0], AXIS["x", EAST], AXIS["y", NORTH], AUTHORITY["EPSG","100002"]]
Note
This code sample has been formatted for readability. The information will need to be provided on a single line instead, or with backslash characters at the end of every line (except the last one).
Go into the
user_projections
directory inside your data directory, and open theepsg.properties
file. If this file doesn’t exist, you can create it.Insert the code WKT for the projection at the end of the file (on a single line or with backslash characters):
100002=PROJCS["NAD83 / Austin", \ GEOGCS["NAD83", \ DATUM["North_American_Datum_1983", \ SPHEROID["GRS 1980", 6378137.0, 298.257222101], \ TOWGS84[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]], \ PRIMEM["Greenwich", 0.0], \ UNIT["degree", 0.017453292519943295], \ AXIS["Lon", EAST], \ AXIS["Lat", NORTH]], \ PROJECTION["Lambert_Conformal_Conic_2SP"], \ PARAMETER["central_meridian", -100.333333333333], \ PARAMETER["latitude_of_origin", 29.6666666666667], \ PARAMETER["standard_parallel_1", 31.883333333333297], \ PARAMETER["false_easting", 2296583.333333], \ PARAMETER["false_northing", 9842500.0], \ PARAMETER["standard_parallel_2", 30.1166666666667], \ UNIT["m", 1.0], \ AXIS["x", EAST], \ AXIS["y", NORTH], \ AUTHORITY["EPSG","100002"]]
Note
Note the number that precedes the WKT. This will determine the EPSG code. So in this example, the EPSG code is 100002.
- Save the file.
- Restart GeoServer.
- Verify that the CRS has been properly parsed by navigating to the srs_list page in the web_admin.
- If the projection wasn’t listed, examine the logs for any errors.
In some situations it is necessary to override an official EPSG code with a custom definition. A common case is the need to change the TOWGS84 parameters in order to get better reprojection accuracy in specific areas.
The GeoServer referencing subsystem checks the existence of another property file, epsg_overrides.properties
, whose format is the same as epsg.properties
. Any definition contained in epsg_overrides.properties
will override the EPSG code, while definitions stored in epsg.properties
can only add to the database.
Special care must be taken when overriding the Datum parameters, in particular the TOWGS84 parameters. To make sure the override parameters are actually used the code of the Datum must be removed, otherwise the referencing subsystem will keep on reading the official database in search of the best Datum shift method (grid, 7 or 5 parameters transformation, plain affine transform).
For example, if you need to override the official TOWGS84 parameters of EPSG:3003 to better match the peninsular area of Italy:
PROJCS["Monte Mario / Italy zone 1",
GEOGCS["Monte Mario",
DATUM["Monte Mario",
SPHEROID["International 1924", 6378388.0, 297.0, AUTHORITY["EPSG","7022"]],
TOWGS84[-50.2, -50.4, 84.8, -0.69, -2.012, 0.459, -5.791915759418465],
AUTHORITY["EPSG","6265"]],
PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]],
UNIT["degree", 0.017453292519943295],
AXIS["Geodetic longitude", EAST],
AXIS["Geodetic latitude", NORTH],
AUTHORITY["EPSG","4265"]],
PROJECTION["Transverse Mercator", AUTHORITY["EPSG","9807"]],
PARAMETER["central_meridian", 9.0],
PARAMETER["latitude_of_origin", 0.0],
PARAMETER["scale_factor", 0.9996],
PARAMETER["false_easting", 1500000.0],
PARAMETER["false_northing", 0.0],
UNIT["m", 1.0],
AXIS["Easting", EAST],
AXIS["Northing", NORTH],
AUTHORITY["EPSG","3003"]]
You should write the following (on a single line; here it is formatted over multiple lines for readability):
3003 =
PROJCS["Monte Mario / Italy zone 1",
GEOGCS["Monte Mario",
DATUM["Monte Mario",
SPHEROID["International 1924", 6378388.0, 297.0, AUTHORITY["EPSG","7022"]],
TOWGS84[-104.1, -49.1, -9.9, 0.971, -2.917, 0.714, -11.68],
AUTHORITY["EPSG","6265"]],
PRIMEM["Greenwich", 0.0, AUTHORITY["EPSG","8901"]],
UNIT["degree", 0.017453292519943295],
AXIS["Geodetic longitude", EAST],
AXIS["Geodetic latitude", NORTH]],
PROJECTION["Transverse_Mercator"],
PARAMETER["central_meridian", 9.0],
PARAMETER["latitude_of_origin", 0.0],
PARAMETER["scale_factor", 0.9996],
PARAMETER["false_easting", 1500000.0],
PARAMETER["false_northing", 0.0],
UNIT["m", 1.0],
AXIS["Easting", EAST],
AXIS["Northing", NORTH],
AUTHORITY["EPSG","3003"]]
The definition has been changed in two places: the TOWGS84 parameters, and the Datum code, AUTHORITY["EPSG","4265"]
, which has been removed.
Advanced Database Connection Pooling Configuration¶
- Database connections are valuable resources and as such shall be managed with care:
- they are expensive for the database server to create and maintain, since they are usually child processes of the DBMS server process
- since creating a connection is not a zero-cost operation, we should avoid opening a new connection every time we need to talk to the database; instead, we should tend to create connections in advance, to minimize the impact of connection set-up time on request time
- as a consequence of the fact that a connection requires non-negligible resources on the DBMS server, DBAs tend to:
- limit the number of connections globally available (e.g. PostgreSQL by default has a limit set to 100)
- limit the lifetime of connections, in order to discourage clients from retaining them for a very long time
The purpose served by a connection pool is to maintain connections to an underlying database between requests. The benefit is that connection set-up only needs to occur once, on the first request, while subsequent requests use existing connections and achieve a performance benefit as a result.
Now let’s get into GeoServer specifics. Most GeoServer DataStores give you the choice between a JNDI [1] store and a standard one, which basically means you can either have GeoServer manage the connection pool for you, or configure the pool externally (within the application server of your choice) and have GeoServer lean on it to get connections. One way or the other, you will always end up using a connection pool in GeoServer.
GeoServer Internal Connection Pool Parameters¶
Whenever a data store backed by a database is added to GeoServer, an internal connection pool, relying on Apache Commons DBCP [2], is created by default. This connection pool is configurable; note, however, that the pool configuration parameters GeoServer exposes are a subset of the possible ones, although the most interesting ones are there. Here below you can find some more details on the available connection parameters.
- max connections : The maximum number of connections the pool can hold. When this number is exceeded, additional requests that need a database connection are blocked until a connection from the pool becomes available, and eventually time out if that does not happen within the time specified by the connection timeout. The maximum number of connections limits the number of concurrent requests that can be made against the database.
- min connections : The minimum number of connections the pool will hold. This number of connections is held even when there are no active requests. When it is exceeded while serving incoming requests, additional connections are opened until the pool reaches its maximum size (described above). The implications of this number are twofold:
- 1. If it is very far from max connections, it might limit GeoServer’s ability to respond quickly to unexpected or sudden heavy load, since creating a new connection takes a non-negligible time. However, this set-up works well when the DBMS is heavily loaded, since it tends to use as few connections as possible at all times.
- 2. If it is very close to max connections, GeoServer will respond very quickly to sudden load. However, in this case GeoServer puts a bigger burden on the DBMS, since the pool tries to hold all the needed connections at all times.
- validate connections : Flag indicating whether connections from the pool should be validated before they are used. A connection in the pool can become invalid for a number of reasons, including network breakdown, database server timeout, etc. The benefit of setting this flag is that an invalid connection will never be used, which prevents client errors. The downside is a small performance penalty paid to validate each connection when it is borrowed from the pool, since validation is done by sending a small query to the server. The cost of this query is usually low; for instance, on PostGIS the validation query is SELECT 1.
- fetch size : The number of records read from the database in each network exchange. If set too low (< 50), network latency will negatively affect performance; if set too high, it might consume a significant portion of GeoServer’s memory and push it towards an Out Of Memory error. It defaults to 1000; it might be beneficial to push it higher if the typical database query reads much more data than this, and there is enough heap memory to hold the results.
- connection timeout : Time, in seconds, the connection pool will wait before giving up its attempt to get a new connection from the database. Defaults to 20 seconds. This timeout kicks in under heavy load, when the number of requests needing a connection to a certain database greatly outnumbers the connections available in the pool, so some requests might fail with timeout errors while acquiring a connection. This condition is not problematic per se, since a request usually does not use a DB connection for its entire life cycle (we do not need 100 connections at hand to respond to 100 requests); however, we should strive to limit it, since it queues threads on the connection pool after they may have already allocated memory (e.g. for rendering). We will get back to this later on.
- max open prepared statements : Maximum number of prepared statements kept open and cached for each connection in the pool.
- max wait : Number of seconds the connection pool will wait before timing out while attempting to get a new connection (default: 20 seconds).
- validate connection : Forces GeoServer to check that the connections borrowed from the pool are valid (i.e. not closed on the DBMS side) before using them.
- test while idle : Periodically tests whether connections are still valid while idle in the pool. Sometimes performing a test query using an idle connection can make the data store unavailable for a while. Often the cause of this troublesome behaviour is a network firewall placed between GeoServer and the database that forces the closing of the underlying idle TCP connections.
- evictor run periodicity : Number of seconds between idle object evictor runs.
- max connection idle time : Number of seconds a connection needs to stay idle before the evictor starts to consider closing it.
- evictor tests per run : Number of connections checked by the idle connection evictor on each of its runs.
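The interplay between the maximum pool size and the connection timeout can be sketched with a toy pool. This is an illustration only, not how DBCP is implemented: once all connections are borrowed, further borrowers block and eventually time out.

```python
# Toy connection pool (illustrative, not DBCP): "max connections" caps how
# many borrowers can hold a connection at once; borrowers beyond that wait
# up to "timeout" seconds, then fail.
import queue

class Pool:
    def __init__(self, max_connections, timeout):
        self._free = queue.Queue(maxsize=max_connections)
        for i in range(max_connections):
            self._free.put(f"conn-{i}")   # stand-in for a real DB connection
        self.timeout = timeout

    def borrow(self):
        try:
            return self._free.get(timeout=self.timeout)
        except queue.Empty:
            raise TimeoutError("no connection available within timeout")

    def release(self, conn):
        self._free.put(conn)

pool = Pool(max_connections=2, timeout=0.1)
a, b = pool.borrow(), pool.borrow()       # pool is now exhausted
try:
    pool.borrow()                         # a third borrower times out
except TimeoutError as e:
    print(e)
pool.release(a)                           # once a connection is released...
print(pool.borrow())                      # ...borrowing succeeds again
```

The same dynamic is what makes the max wait / connection timeout settings so important under heavy load, as discussed in the production-usage section below.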
Prepared statements are used by databases to avoid re-planning the data access for every query: the plan is computed only once up-front and, as long as the statement is cached, does not need to be re-computed.
In business applications that fetch a small amount of data at a time this is beneficial for performance; however, in spatial applications, where we typically fetch thousands of rows, the benefit is limited and sometimes turns into a performance problem. This is the case with PostGIS, which is able to tune the access plan by inspecting the requested BBOX and deciding whether a sequential scan is preferable (when the BBOX accesses most of the data) or whether using the spatial index is best. So, as a rule of thumb, when working with PostGIS it is better not to enable prepared statements.
With other databases there is no choice: Oracle currently works only with prepared statements, SQL Server only without them (this is more often related to implementation limitations than to database-specific issues).
There is an upside to using prepared statements though: no SQL injection attacks are possible when using them. GeoServer code tries hard to avoid this kind of attack when working without prepared statements, but enabling them makes an attack via filter parameters basically impossible.
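The protection prepared statements provide can be demonstrated with a small, self-contained sketch. Here SQLite stands in for a real spatial DBMS, and the table name and inputs are invented for the example:

```python
# Illustration (sqlite3 standing in for a spatial DBMS): with a prepared /
# parameterized statement the malicious input is treated as a plain value,
# not as SQL, so the injection attempt matches nothing.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE layers (name TEXT)")
db.execute("INSERT INTO layers VALUES ('roads'), ('rivers')")

malicious = "x' OR '1'='1"

# String concatenation: the attacker's OR clause becomes part of the SQL
unsafe = db.execute(
    "SELECT name FROM layers WHERE name = '" + malicious + "'").fetchall()

# Parameterized statement: the same input is just a literal string
safe = db.execute(
    "SELECT name FROM layers WHERE name = ?", (malicious,)).fetchall()

print(len(unsafe))  # 2 -- injection succeeded, all rows returned
print(len(safe))    # 0 -- input treated as a literal, no match
```

This is the mechanism behind the "filter parameters attack becomes basically impossible" remark above.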
Summarising, when creating standard DataStores to serve vector data from a DBMS in GeoServer, remember that an internal connection pool will be created. This approach is the simplest to implement, but it might lead to an inefficient distribution of connections between different stores in the following cases:
- If tables are separated into different schemas, multiple stores are needed to serve them, since GeoServer works best when the “schema” parameter is specified; this leads to the creation of (mostly unnecessary) connection pools.
- If stores in different workspaces connect to the same database, connection pools are again unnecessarily duplicated across stores, leading to inefficient usage of connections.
Long story short, the fact that the pool is internal to each store may lead to inefficient usage of the connections to the underlying DBMS, since they will be split between multiple stores, limiting the scalability of each of them: having 100 connections shared between N standard DataStores imposes a limit on the maximum number each can use. If instead we kept the connections in a single pool, shared in turn by the various DataStores, we would achieve much more efficient sharing: a single store under high load could scale to use all the available connections.
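A quick back-of-the-envelope comparison makes the point concrete (the numbers are illustrative):

```python
# Splitting a fixed connection budget across N independent per-store pools
# caps each store, while a single shared (e.g. JNDI) pool lets any one
# store under load scale up to the whole budget.
total_connections = 100   # e.g. the PostgreSQL default limit
stores = 5                # hypothetical number of DataStores

per_store_cap_split = total_connections // stores   # separate internal pools
per_store_cap_shared = total_connections            # one shared pool

print(per_store_cap_split)    # 20
print(per_store_cap_shared)   # 100
```

With separate pools, a single busy store can never use more than its 20-connection slice even while the other 80 connections sit idle.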
Configuration of a JNDI connection pool with Tomcat¶
Many DataStores in GeoServer provide the option of exploiting the Java Naming and Directory Interface (JNDI) for managing the connection pools. JNDI allows components in a Java system to look up other objects and data by a predefined name. A common use of JNDI is to set up a connection pool in order to improve database resource management.
In order to set up a connection pool, Tomcat needs to be provided with a JDBC driver for the database used and with the necessary pool configuration. Usually the JDBC driver can be found on the website of the DBMS provider or in the database installation directory. This is important to know, since we are not usually allowed to redistribute these drivers.
The JDBC driver for creating connection pool to be shared via JNDI shall be placed in the $TOMCAT_HOME/lib
directory, where $TOMCAT_HOME
is the directory in which Tomcat is installed.
Note
Make sure to remove the JDBC driver from the GeoServer WEB-INF/lib
folder once it has been copied to the Tomcat shared libs, to avoid issues when using JNDI DataStores.
The configuration is very similar between different databases. Here below some typical examples will be described.
PostgreSQL JNDI Configuration¶
For configuring a PostgreSQL JNDI pool you have to remove the Postgres JDBC driver (it should be named postgresql-X.X-XXX.jdbc3.jar
) from the GeoServer WEB-INF/lib
folder and put it into the TOMCAT_HOME/lib
folder.
The first step to perform for creating a JNDI DataSource (connection pool) is to edit the context.xml file inside $TOMCAT_HOME/conf
directory. This file contains the different JNDI resources configured for Tomcat.
In this case, we are going to configure a JNDI DataSource for a PostgreSQL database. If the file is not present you should create it, adding content similar to the following:
<Context>
<Resource
name="jdbc/postgres" auth="Container" type="javax.sql.DataSource"
driverClassName="org.postgresql.Driver"
url="jdbc:postgresql://localhost:5432/testdb"
username="admin"
password="admin"
maxActive="20"
maxIdle="10"
maxWait="10000"
minEvictableIdleTimeMillis="300000"
timeBetweenEvictionRunsMillis="300000"
validationQuery="SELECT 1"/>
</Context>
Note
If the file is already present, do not add the <Context></Context> tags.
In the previous XML snippet we configured a connection to a PostgreSQL database called testdb, with hostname localhost and port number 5432.
Note
Note that the user shall set proper username and password for the database.
Some of the parameters that can be configured for the JNDI connection pool are as follows:
- maxActive : The number of maximum active connections to use.
- maxIdle : The number of maximum unused connections.
- maxWait : The maximum number of milliseconds that the pool will wait.
- poolPreparedStatements : Enable the prepared statement pooling (very important for good performance).
- maxOpenPreparedStatements : The maximum number of prepared statements in pool.
- validationQuery : (default null) A validation query that double checks the connection is still alive before actually using it.
- timeBetweenEvictionRunsMillis : (default -1) The number of milliseconds to sleep between runs of the idle object evictor thread. When non-positive, no idle object evictor thread will be run.
- numTestsPerEvictionRun : (default 3) The number of objects to examine during each run of the idle object evictor thread (if any).
- minEvictableIdleTimeMillis : (default 1000 * 60 * 30) The minimum amount of time, in milliseconds, an object may sit idle in the pool before it is eligible for eviction by the idle object evictor (if any).
- removeAbandoned : (default false) Flag to remove abandoned connections if they exceed the removeAbandonedTimeout. If set to true, a connection is considered abandoned and eligible for removal if it has been idle longer than the removeAbandonedTimeout. Setting this to true can recover db connections from poorly written applications which fail to close a connection.
- removeAbandonedTimeout : (default 300) Timeout in seconds before an abandoned connection can be removed.
- logAbandoned : (default false) Flag to log stack traces for application code which abandoned a Statement or Connection.
- testWhileIdle : (default false) Flag used to test connections when idle.
Warning
The previous settings should be modified only by experienced users. Using wrongly low values for removeAbandonedTimeout and minEvictableIdleTimeMillis may result in connection failures; if that happens, it is important to set logAbandoned to true and check your catalina.out log file.
More information about the parameters can be found in the DBCP documentation.
Launch GeoServer and navigate to the
section. First, choose the PostGIS (JNDI) datastore and give it a name:

PostGIS JNDI Store Configuration
And then you can configure the connection pool:

PostGIS JNDI Store Configuration
While doing this, make sure the schema is properly configured, otherwise the DataStore will list all the tables it can find in the schemas it has been given access to.
Microsoft SQLServer JNDI Configuration¶
Before configuring a SQLServer connection pool you must follow these Guidelines.
Warning
You must remove the sqljdbc.jar
file from the WEB-INF/lib
folder and put it inside the $TOMCAT_HOME/lib
folder.
In this case, we are going to configure a JNDI DataSource for a SQLServer database. You shall create/edit the context.xml file inside $TOMCAT_HOME/conf
directory
with the following lines:
<Context>
<Resource
name="jdbc/sqlserver"
auth="Container"
type="javax.sql.DataSource"
url="jdbc:sqlserver://localhost:1433;databaseName=test;user=admin;password=admin;"
driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
username="admin"
password="admin"
maxActive="20"
maxIdle="10"
maxWait="10000"
minEvictableIdleTimeMillis="300000"
timeBetweenEvictionRunsMillis="300000"
validationQuery="SELECT 1"/>
</Context>
Note
Note that database name, username and password must be defined directly in the URL.
Launch GeoServer and navigate to the
section. Then choose the Microsoft SQL Server (JNDI) datastore and give it a name:

Microsoft SQLServer JNDI Store Configuration
Afterwards, you can configure the connection pool:

Microsoft SQLServer JNDI Store Configuration
Oracle JNDI Configuration¶
Before configuring an Oracle connection pool you should download the Oracle plugin from the GeoServer Download Page
and then put the ojdbc14.jar
file into the $TOMCAT_HOME/lib
folder.
Warning
You must remove the ojdbc14.jar
file from the WEB-INF/lib
folder and put it inside the $TOMCAT_HOME/lib
folder.
First you must create/edit the context.xml file inside $TOMCAT_HOME/conf
directory with the following lines:
<Context>
<Resource
name="jdbc/oralocal"
auth="Container" type="javax.sql.DataSource"
url="jdbc:oracle:thin:@localhost:1521:xe"
driverClassName="oracle.jdbc.driver.OracleDriver"
username="dbuser"
password="dbpasswd"
maxActive="20"
maxIdle="3"
maxWait="10000"
minEvictableIdleTimeMillis="300000"
timeBetweenEvictionRunsMillis="300000"
poolPreparedStatements="true"
maxOpenPreparedStatements="100"
validationQuery="SELECT SYSDATE FROM DUAL" />
</Context>
Launch GeoServer and navigate to the
section. Then choose the Oracle NG (JNDI) datastore and give it a name:

Oracle JNDI Store Configuration
Afterwards, you can configure the connection pool:

Oracle JNDI Store Configuration
Note
In Oracle the schema is usually the user name, upper cased.
Configuring Connection Pools for production usage¶
In general it is important to set the connection waiting time so that the connection pool does not become a place where threads executing requests queue up under heavy load. Under heavy load, threads executing requests for a vector layer may indeed outnumber the available connections in the pool, so those threads will block while trying to acquire a new connection; if the number of connections is much smaller than the number of incoming requests and the max wait time is quite long (e.g. 60 seconds), we will end up with many threads waiting a long time to acquire a connection after most of the resources they need have already been allocated, especially the memory back buffer in the case of WMS requests.
The max wait time shall in general be set according to the expected maximum end-to-end execution time of a request. This includes things like accessing the file system and loading the data. For instance, WMS requests allow us to specify a maximum response time; if we set it to N seconds, the max wait time should be set to a smaller value, since we don’t want to waste resources by having threads blocked unnecessarily waiting for a connection. In this case it is preferable to fail fast, releasing resources that might otherwise be held unnecessarily.
- How the data is organized between databases, schemas and tables impacts the degree of flexibility we have when trying to share connections optimally, regardless of whether we are using JNDI pools or not. Summarising:
- Splitting tables across many schemas makes it hard for GeoServer to access tables belonging to different schemas unless we switch to JNDI, since the schema must be specified as part of the connection parameters when using internal pools.
- Using different users for different schemas prevents JNDI from working efficiently across schemas. It is best, when possible, to use a single dedicated account across schemas.
- Generally speaking, a complex combination of users and schemas can lead to an inefficient split of the available connections into multiple pools.
Long story short, whenever possible strive to use a small number of users and, if not using JNDI, a small number of schemas; that said, JNDI is a must for organizations willing to create a complex set-up where different workspaces (i.e. Virtual Services) serve the same content differently.
Regardless of how we configure the validation query, it is extremely important to always remember to validate connections before using them in GeoServer; not doing so might lead to spurious errors due to stale connections sitting in the pool. This can be achieved with the internal connection pool (via the validate connections box) as well as with the pools declared in JNDI (via the validation query mechanism); it is worth noting that the latter allows for finer-grained configurability.
Footnotes
[1] | http://en.wikipedia.org/wiki/Java_Naming_and_Directory_Interface |
[2] | http://commons.apache.org/proper/commons-dbcp/ |
Installing and Configuring the Monitoring plugin¶
The monitoring extension provides a request monitor for GeoServer. It captures information about each request a GeoServer instance handles and produces reports based on the request data.
Installing the Monitoring Extension¶
Monitoring is an official extension and, as such, can be found alongside any GeoServer release. The extension is split into two modules, “core” and “hibernate”: core provides the basic underpinnings of the module and allows monitoring of “live” requests, while the hibernate module provides database storage of the request data.
- Get the monitoring zip files, already downloaded for you, from the training material
data\plugins
folder (search for zip files containing the monitoring
word, there will be two) - Extract the contents of the archives into the
<TRAINING_ROOT>/tomcat-6.0.36/instances/instance1/webapps/geoserver/WEB-INF/lib
directory of the GeoServer installation.
There are two ways to verify that the monitoring extension has been properly installed.
Start GeoServer and open the Web Administration interface. Log in using the administration account. If successfully installed, there will be a Monitor section on the left column of the home page.
Monitoring section in the web admin interface
Start GeoServer and navigate to the current data directory. If successfully installed, a new directory named
monitoring
will be created in the data directory.
Basic Configuration of the Monitor Extension¶
Many aspects of the monitor extension are configurable. All configuration files
are stored in the data directory under the monitoring
directory:
<data_directory>
monitoring/
db.properties
filter.properties
hibernate.properties
monitor.properties
The function of these files will be discussed below.
The monitoring extension supports different “monitoring modes” that control how request data is captured and stored. Currently three modes are supported:
- history (Default) - Only historical request information is available. No live information is maintained.
- live - Only information about live requests is maintained.
- mixed - A combination of live and history. This mode is experimental.
The mode is set in the monitor.properties
file.
Note
For the Virtual Machine GeoServer instance we are using the “live” mode.
History mode persists information about all requests in an external database. It does not provide any real time information. This mode is appropriate in cases where a user is most interested in analyzing request history over a given time period.
Live mode only maintains short lived information about requests that are currently executing. It also maintains a small buffer of recent requests. No external database is used with this mode and no information is persisted for the long term.
This mode is most appropriate in cases where a user only cares about what a server is doing in real time and is not interested about request history.
Mixed mode combines both live and history mode in that it maintains both real time information and persists all request data to the monitoring database. This mode however is experimental and comes with more overhead than the other two modes. This is because mixed mode must perform numerous database transactions over the life cycle of a single request (in order to maintain live information), whereas history mode only has to perform a single database transaction for a request.
This mode is most appropriate when both real time request information and request history are desired. This mode is also most appropriate in a clustered server environment in which a user is interested in viewing real time request information about multiple nodes in a cluster.
By default monitored request data is stored in an embedded H2 database located
in the monitoring
directory. This can be changed by editing the
db.properties
file:
# default configuration is for h2
driver=org.h2.Driver
url=jdbc:h2:file:${GEOSERVER_DATA_DIR}/monitoring/monitoring
For example to store request data in an external PostgreSQL database:
driver=org.postgresql.Driver
url=jdbc:postgresql://localhost:5432/monitoring
username=bob
password=foobar
Warning
The above is just an example; it does not match the training users and environment.
By default, not all requests are monitored; excluded are any web admin requests and any monitor HTTP API requests. These exclusions are configured in the filter.properties
file:
/rest/monitor/**
/web
/web/**
These default filters can be changed or extended to filter more types of requests. For example to filter out all WFS requests:
/wfs
The contents of filter.properties
are a series of ant-style patterns that
are applied to the path of the request. Consider the following request:
http://localhost:8083/geoserver/wms?request=getcapabilities
The path of the above request is /wms
. In the following request:
http://localhost:8083/geoserver/rest/workspaces/topp/datastores.xml
The path is /rest/workspaces/topp/datastores.xml
.
In general, the path used in filters is comprised of the portion of the URL
after /geoserver
(including the preceding /
) and before the query string ?
:
http://<host>:<port>/geoserver/<path>?<queryString>
Note
For more information about ant-style pattern matching, see the Apache Ant manual.
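As a rough sketch of how such ant-style patterns can be matched against a request path (this is not the monitoring extension's actual matcher; `**` here spans any number of path segments, `*` a single one):

```python
# Illustrative ant-style matcher for the filter.properties patterns above.
import re

def ant_match(pattern, path):
    """Translate an ant-style pattern to a regex and match the whole path."""
    regex = re.escape(pattern)
    # Replace "**" first so it is not consumed by the single-star rule.
    regex = regex.replace(r"\*\*", ".*").replace(r"\*", "[^/]*")
    return re.fullmatch(regex, path) is not None

# The default exclusions shipped in filter.properties
filters = ["/rest/monitor/**", "/web", "/web/**"]

def is_excluded(path):
    return any(ant_match(p, path) for p in filters)

print(is_excluded("/web/wicket/resource"))        # True  (web admin request)
print(is_excluded("/rest/monitor/requests.html")) # True  (monitor API)
print(is_excluded("/wms"))                        # False (monitored)
```

So a GetMap request to `/wms` is recorded, while web admin and monitor API traffic is filtered out.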
Go to the Map Preview and open the geosolutions:Counties layer by clicking on the
OpenLayers
link. Zoom the map a few times.
Also use the GML preview for the same layer.
Navigate to the Monitor/Reports section
Click on
OWS Request Summary
to show a detailed chart like the following:
Logging all requests on the file system¶
The history mode logs all requests into a database. This can put a very significant strain on the database and can lead to insertion issues as the request table begins to hold millions of records.
As an alternative to the history mode it’s possible to enable the auditing logger, which will log the details of each request in a file, which is periodically rolled. Secondary applications can then process these log files and build ad-hoc summaries offline.
The monitor.properties
file can contain the following items to enable and configure file auditing:
Go to the ${GEOSERVER_DATA_DIR}/monitoring and open the
monitor.properties
then append the following configuration:
audit.enabled=true
audit.path=${TRAINING_ROOT}
audit.roll_limit=20
Replace
${TRAINING_ROOT}
with the full path to the workshop root folder, and remember to always use forward slashes,/
, in the path, regardless of the operating system. For example, on Windows the path might look likec:/data/Training_2.5.X-32
Go to the Map Preview and open the geosolutions:states layer by clicking on the
OpenLayers
link. Zoom the map a few times.
Open the newly created log file (named like
geoserver_audit_yyyymmdd_nn.log
) located at ${TRAINING_ROOT}.Note
- audit.enabled: is used to turn on the logger (it is off by default).
- audit.path: is the directory where the log files will be created.
- audit.roll_limit: is the number of requests logged into a file before rolling happens.
Note
The files are also automatically rolled at the beginning of each day.
The log directory will contain a number of log files following the geoserver_audit_yyyymmdd_nn.log
pattern. The nn
is increased at each roll of the file. The contents of the log directory will look like:
geoserver_audit_20110811_2.log
geoserver_audit_20110811_3.log
geoserver_audit_20110811_4.log
geoserver_audit_20110811_5.log
geoserver_audit_20110811_6.log
geoserver_audit_20110811_7.log
geoserver_audit_20110811_8.log
The log contents are driven by three FreeMarker templates. We can customize them, for example to turn the log file into a CSV file.
On the file system navigate to the GeoServer data directory located at $GEOSERVER_DATA_DIR.
In the
monitoring
directory create a new file namedheader.ftl
(it is used once, when a new log file is created, to form the first few lines of the file). Open
header.ftl
in the text editor of your choice and enter the following content:# start time,services,version,operation,url,response content type,total time,response length,error flag
Create another file named
content.ftl
.Open
content.ftl
in the text editor of your choice and enter the following content:${startTime?datetime?iso_utc_ms},${service!""},${owsVersion!""},${operation!""},"${path!""}${queryString!""}",${responseContentType!""},${totalTime},${responseLength?c},<#if error??>failed<#else>success</#if>
Create a last file named
footer.ftl
, and leave its contents empty. Run a few requests again; the log files should now contain something like the following:
# start time,services,version,operation,url,response content type,total time,response length,error flag
2012-06-07T10:37:09.725Z,WMS,1.1.1,GetMap,"/geosolutions/wmsLAYERS=geosolutions:ccounties&STYLES=&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG:4269&BBOX=-106.17254516602,39.489453002927,-105.18378466798,40.054948608395&WIDTH=577&HEIGHT=330",image/png,59,30420,success
2012-06-07T10:37:10.075Z,WMS,1.1.1,GetMap,"/geosolutions/wmsLAYERS=geosolutions:ccounties&STYLES=&FORMAT=image/png&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&SRS=EPSG:4269&BBOX=-105.84010229493,39.543136352537,-105.34572204591,39.825884155271&WIDTH=577&HEIGHT=330",image/png,45,18692,success
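An offline application processing these audit files could be as simple as the following sketch, which parses a couple of sample lines (URLs abbreviated, field order as in the header template defined earlier) and summarises time and bytes per operation:

```python
# Sketch of offline audit-log processing: parse CSV lines in the layout
# produced by the templates above and aggregate per-operation statistics.
import csv
import io

# Abbreviated sample in the same layout as the audit log
sample = (
    '2012-06-07T10:37:09.725Z,WMS,1.1.1,GetMap,'
    '"/geosolutions/wmsLAYERS=...,BBOX=...",image/png,59,30420,success\n'
    '2012-06-07T10:37:10.075Z,WMS,1.1.1,GetMap,'
    '"/geosolutions/wmsLAYERS=...,BBOX=...",image/png,45,18692,success'
)

summary = {}
for row in csv.reader(io.StringIO(sample)):
    start, service, version, op, url, ctype, total_ms, length, status = row
    stats = summary.setdefault(op, {"count": 0, "ms": 0, "bytes": 0})
    stats["count"] += 1
    stats["ms"] += int(total_ms)
    stats["bytes"] += int(length)

print(summary["GetMap"])  # {'count': 2, 'ms': 104, 'bytes': 49112}
```

A real processor would iterate over the rolled `geoserver_audit_*.log` files instead of an in-memory sample.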
How to measure performances with JMeter¶
In this submodule we are going to describe how to use the JMeter tool:
Configuring JMeter for a simple test¶
Apache JMeter is an open source Java desktop application, built to verify functional behavior, perform load tests, and measure performance.
This section explains how to run performance tests using JMeter in order to evaluate GeoServer’s performance when serving WMS requests. The performance test aims to stress the server and evaluate the response time and throughput with an increasing number of simulated users sending concurrent requests to the server.
Note
Ideally, to avoid adding extra load to the server, JMeter should run on a different machine.
Warning
If you have performed the exercises in the security section, please go back to the layer and service security pages and open access to everybody, on all data and all services, before performing the exercises in this section
From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter:

JMeter interface
Add a new Thread Group by right-clicking on the Test Plan tree node:

Adding a new Thread Group
Add a new Loop Controller by right-clicking on the Thread Group tree node:

Adding a new Loop Controller
In the Thread Group panel set the number of threads for the test to 4 (this represents the number of simultaneous requests made to GeoServer) and the ramp-up period to 60. Also, check Forever on the Loop Count field.

Setting the Thread Group
Right click on the Loop Controller tree node and add a new HTTP Request element:

Adding a new HTTP Request
Add the following listeners by right clicking on the Test Plan tree node: View Results Tree, Summary Report, Graph Results.

Adding the Listeners
In the HTTP Request enter the following basic configuration:
Field              Value
Server Name or IP  localhost
Port Number        8083
Path               geoserver/ows

HTTP Request host/port/path configuration
From the training data dir root, open data/jmeter_data/jmeter_request_params.txt, select and copy its contents to the clipboard, then click "Add from Clipboard" in the "HTTP request" panel to set up a sample GetMap request:
HTTP parameters configuration
At this point JMeter is configured to run a GeoServer performance test:

Select the Thread Group tree node, then open the Run menu and select Start to start the JMeter test.

Starting the JMeter test
Select View Results Tree to directly see the request information produced and the request result:

The View Results Tree panel
Select Summary report to view the statistical information about the requests:

The Aggregate Graph panel
Select Graph Results to analyze the trend of the requests:

The Spline Visualizer panel
Configuring JMeter for a Multiscale test¶
This chapter explains how to create a custom randomized Multiscale test with a set of multiple concurrent threads.
The first part describes how to generate a CSV file with randomized requests at different scales. The second shows how to configure a new JMeter test with multiple simultaneous threads.
Open the file gdal.bat under the $TRAINING_ROOT folder and run:

cd %TRAINING_ROOT%\geoserver_data\data\boulder
gdalinfo srtm_boulder.tiff
The output of the command will be something like this:
Driver: GTiff/GeoTIFF
Files: srtm_boulder.tiff
Size is 2520, 1800
Coordinate System is:
GEOGCS["WGS 84",
    DATUM["WGS_1984",
        SPHEROID["WGS 84",6378137,298.257223563,
            AUTHORITY["EPSG","7030"]],
        AUTHORITY["EPSG","6326"]],
    PRIMEM["Greenwich",0],
    UNIT["degree",0.0174532925199433],
    AUTHORITY["EPSG","4326"]]
Origin = (-105.700138888888890,40.300138888888888)
Pixel Size = (0.000277777777778,-0.000277777777778)
Metadata:
  AREA_OR_POINT=Area
Image Structure Metadata:
  INTERLEAVE=BAND
Corner Coordinates:
Upper Left  (-105.7001389,  40.3001389) (105d42' 0.50"W, 40d18' 0.50"N)
Lower Left  (-105.7001389,  39.8001389) (105d42' 0.50"W, 39d48' 0.50"N)
Upper Right (-105.0001389,  40.3001389) (105d 0' 0.50"W, 40d18' 0.50"N)
Lower Right (-105.0001389,  39.8001389) (105d 0' 0.50"W, 39d48' 0.50"N)
Center      (-105.3501389,  40.0501389) (105d21' 0.50"W, 40d 3' 0.50"N)
Band 1 Block=256x256 Type=Int16, ColorInterp=Gray
  Overviews: 1260x900, 630x450, 315x225, 158x113, 79x57, 40x29
The information needed to create a multiscale CSV file is:

Tile Size     256 x 256
Pixel Size    0.000277777777778
Bounding Box  (-105.7001389, -105.0001389), (39.8001389, 40.3001389)

Run:

cd %TRAINING_ROOT%\data\jmeter_data
wms_request.py -count 100 -region -105.7 39.8 -105.0 40.3 -minres 0.00028 -maxres 0.00224 -minsize 256 256 -maxsize 1024 1024 > multiscale.csv
wms_request.py is a Python script which generates randomized requests at different bounding boxes and resolutions. The parameters are described in the following table:

Parameter        Description
count            Number of requests to generate
region           Maximum bounding box of each request
minres/maxres    Minimum and maximum pixel size to request (typically at least the native resolution)
minsize/maxsize  Minimum and maximum dimensions of the requested image (typically at least as big as the tile size)

The CSV file is structured following the rule $width;$height;$bbox. For example, 290;444;-105.5904,39.910198,-105.48776,40.067338 indicates a request of size 290x444 and bounding box [-105.5904,39.910198,-105.48776,40.067338]. JMeter must be configured to parse the CSV file correctly by using the CSV Data Set Config element.
From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter:

JMeter interface
Add 3 new Thread Groups named 1, 2 and 4.
For each Thread Group set the Number of Threads (users) field equal to the Thread Group name, and set the ramp-up period and Loop Count fields to 1.

Setting the Thread Group
In the Test Plan section, check the Run Thread Groups consecutively checkbox
Add a new Loop Controller for each Thread Group object:
Each Loop Controller should be configured following this schema:

Thread Group  Loop Count
1             100
2             50
4             50
Right click on each Loop Controller tree node and add a new HTTP Request element with the same name as the Thread Group:

Setting the HTTP Request
In each HTTP Request add the following fields to the panel:
Name    Value      Encode?    Include Equals?
bbox    ${bbox}    unchecked  checked
height  ${height}  unchecked  checked
width   ${width}   unchecked  checked

It should look like the picture:

HTTP Request panel configuration
Uncheck the Follow Redirects and Use KeepAlive checkboxes.

Right click on each Loop Controller tree node and add a new CSV Data Set Config element:

Setting the CSV Data Set Config
Configure the CSV Data Set Config element by adding the path of the CSV file created before and setting the variable definitions:

Configuring the CSV Data Set Config
From the Test Plan tree node add an HTTP Request Default element and enter the following basic configuration:

Field              Value
Server Name or IP  localhost
Port Number        8083
Path               geoserver/ows

It should look like this:
HTTP Default Request host/port/path configuration
From the training data dir root, open data/jmeter_data/jmeter_request_params_2.txt, select and copy its contents to the clipboard, then click "Add from Clipboard" in the "HTTP request" panel to set up a sample GetMap request.

Add the following listeners by right clicking on the Test Plan tree node: "View Results Tree", "Summary Report".

Add the following assertion by right clicking on the Test Plan tree node: "Response Assertion".

Adding Assertions
Note
Using Assertions is helpful because it avoids having to continuously check the results visually.

Configure the "Response Assertion" following this table:

Field                   Value
Apply to                Main sample only
Response field to test  Response Headers
Pattern Matching Rules  Contains

In the Pattern to test panel add: Content-Type: image/png
The final result should look like in the picture:
Configuring Response Assertion
At this point JMeter is configured to run a GeoServer performance test:

Select the Test Plan tree node and select Run –> Start from the top menu to start the JMeter test.

Starting the JMeter test
Select View Results Tree to directly see the request information produced and the request results:

The View Results Tree panel with a sample request
Another request with different resolution and bounding box
Select Summary report to view the statistical information about the requests:

Summary report panel
Configuring JMeter for testing Raster optimization¶
The following section explains how GeoServer performance improves with the optimization of raster files.
Optimization has already been discussed in the Introduction To Processing With GDAL sections, describing the most common techniques used.
Note
This section requires the layers published in the Adding an Image Mosaic, Introduction To Processing With GDAL and Advanced Mosaics Configuration sections.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a mosaic.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
Disable the Thread Groups 8, 16, 32, 64 by right-clicking on them and selecting Disable.
In the CSV Data Set Config element of the remaining thread groups, change the path of the CSV file to point to the optimized.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  geosolutions:boulder_bg
srs     EPSG:26913
At this point JMeter is configured to run a GeoServer performance test:
Run the test
Note
Remember to run and stop the test a few times in order to obtain stable results.
You should see something like this:
View Results Tree panel
When the test is completed, Save the results in a text file and remove them from the console by clicking on Run –> Clear All on the menu
In the HTTP Request Default section modify the following parameter:
Name    Value
layers  geosolutions:boulder_bg_optimized

Run the test again.

Compare the results of this test with the ones saved before. You should see that the throughput is higher with the optimized mosaic.
Configuring JMeter for testing Vector data¶
The following section compares serving vector data from a Shapefile and from PostGIS. For this example a Shapefile containing primary and secondary roads is used.
The purpose is to compare the throughput of the shapefile with that of an optimized database containing the same data. The result will demonstrate that database optimization can provide better throughput than the shapefile.
Open the terminal and go to %TRAINING_ROOT%.

Load the shapefile tl_2014_01_prisecroads located in %TRAINING_ROOT%\data\user_data into PostGIS with the following commands:

setenv.bat
createdb -U postgres -T postgis20 shape2
shp2pgsql -k -I "data\user_data\tl_2014_01_prisecroads\tl_2014_01_prisecroads.shp" public.pgroads | psql -U postgres -d shape2
Note
More information can be found at Loading a Shapefile into PostGIS
On %TRAINING_ROOT% run pgAdmin.bat.

Go to the table pgroads inside the shape2 database and execute the following SQL script, creating an index on the MTFCC column:

CREATE INDEX mtfcc_idx ON pgroads ("MTFCC");
Create a new index
This index optimizes access to the database when filtering on the MTFCC column.
On your Web browser, navigate to the GeoServer Welcome Page.
Following the instructions on Adding a Postgis layer, configure the database shape2 in GeoServer and call it pgroads.

Note

Note that the database Coordinate Reference System is EPSG:4269.

Configure the shapefile tl_2014_01_prisecroads used before in GeoServer following the instructions in Adding a Shapefile and call it allroads.

Note

Note that the shapefile Coordinate Reference System is EPSG:4269.
Go to Styles and click on Add new Style.

On the bottom of the page, click on Choose File and select the SLD file called shproads in the $TRAINING_ROOT/data/jmeter_data directory.

Click on Upload and then on Submit. This new style supports scale dependencies, which are used as a filter on the roads to display.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a vector.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
In the CSV Data Set Config element, change the path of the CSV file to point to the shp2pg.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  geosolutions:allroads
srs     EPSG:4269
styles  shproads
Run the test. You should see something like this:
Sample request on the Shapefile
Note
Remember to run and stop the test a few times in order to obtain stable results.
When the test is completed, Save the results in a text file.
Remove the result from JMeter by clicking on Run –> Clear All on the menu
In the HTTP Request Default element modify the following parameter:

Name    Value
layers  geosolutions:pgroads

Run the test again. It should be noted that the database throughput is greater than that of the Shapefile, because the new index provides faster access to the data, improving GeoServer performance.
Configuring JMeter for testing Style optimization¶
The following section explains how GeoServer performance improves when using optimized styles. Styling is an important feature of GeoServer, but it requires some attention in order to avoid degrading performance.
This tutorial aims to show, using JMeter, how GeoServer performance changes when choosing different styles for the same data set.
Note
This example requires having already completed the first 9 steps of the Creating a Base Map with a Layer Group section, plus the Adding a Shapefile and Adding a Style sections.
On your Web browser, navigate to the GeoServer Welcome Page.
Go to Styles and click on Add new Style.

On the bottom of the page, click on Choose File and select the SLD file called line_label in the $TRAINING_ROOT/data/jmeter_data directory.

Click on Upload and then on Submit. We now have a style which supports labeling but has no control over label conflicts and overlapping.
Return to the GeoServer Welcome Page.
Go to Layer Groups and click on test.

Add a new Layer to the Layer Group called bbuildings:
Add a new Layer to the Layer Group
Change the associated styles by clicking on each style and choosing another one on the list. Use the following styles:
Layer                           Style
geosolutions:Mainrd             line_label
geosolutions:BoulderCityLimits  polygon
geosolutions:bplandmarks        polygon
geosolutions:bbuildings         polygon

Styles configuration
Click on Save. With this configuration we have a Layer Group composed of 4 layers with 4 bad styles associated. This will result in a low throughput compared to that of the test with optimized styles.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a styles.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
Disable Thread Group 8, 16, 32 and 64
In the CSV Data Set Config element, change the path of the CSV file to point to the style.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  test
srs     EPSG:2876
Run the test. You should see something like this:
View Results Tree panel with a bad styling
Note
Remember to run and stop the test a few times in order to obtain stable results.
When the test is completed, Save the results in a text file.
Remove the result from JMeter by clicking on Run –> Clear All on the menu
Go to Layer Groups and click on test.

Change the associated styles by clicking on each style and choosing another one from the list. Use the following styles:
Layer                           Style
geosolutions:Mainrd             mainrd
geosolutions:BoulderCityLimits  citylimits
geosolutions:bplandmarks        arealandmarks
geosolutions:bbuildings         buildings

Styles configuration
Click on Save. The new styles contain scale dependencies and label optimization, which will result in a better throughput.
Run the test again.
View Results Tree panel with good styling
You may see that the throughput is greater than that of the first test. Scale dependencies reduce the number of layers rendered at lower zoom levels, while conflict resolution avoids showing multiple overlapping labels at each zoom level.
Configuring JMeter for testing GeoWebCache fullWMS support¶
The following section compares GeoServer WMS with GeoWebCache fullWMS support. FullWMS is a new feature which allows GeoWebCache to act as a WMS endpoint, like GeoServer.
Using GeoWebCache, the server is able to cache the requested tiles in order to return them faster than GeoServer.
This example will show how to configure GeoWebCache with fullWMS support and how performance improves.
On your Web browser, navigate to the GeoServer Welcome Page.
Go to Gridsets and click on Create a new gridset.

Call it EPSG_2876 and set EPSG:2876 as its Coordinate Reference System.

Click on Compute from maximum extent of CRS and add 15 new zoom levels (from 0 to 14) by clicking on Add zoom level. It should look like this picture:
Create a new Gridset
Click on Save. Now this GridSet can be added to the Layer Group boulder for caching it with GeoWebCache.

Go to Layer Groups and click on boulder.

On the Available gridsets panel add the EPSG_2876 gridset from the list. Then click on Save.

Add the new Gridset
Note
Remember to set Published zoom levels to Min and Max.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a gwc.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
Disable all the Thread Groups except for 8
Disable the Content-Type Check
In the CSV Data Set Config element, change the path of the CSV file to point to the gwc.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  boulder
srs     EPSG:2876
Run the test
Note
Remember to run and stop the test a few times in order to obtain stable results.
When the test is completed, Save the results in a text file.
Remove the result from JMeter by clicking on Run –> Clear All on the menu
Stop GeoServer
Go to $TRAINING_ROOT/data/gwc/geowebcache.xml and add the following snippet:

<gwcConfiguration>
  ...
  <fullWMS>true</fullWMS>
</gwcConfiguration>

Setting fullWMS to true enables the GeoWebCache fullWMS support.

Restart GeoServer.
On the JMeter HTTP Request Default panel, change the Path from geoserver/ows to geoserver/gwc/service/wms in order to send WMS requests directly to GeoWebCache, without passing through GeoServer.

Add a new parameter called hints, which can take 3 values: speed, default and quality. The first should be used for a faster response without concern for image quality; the last, instead, is slower but gives better quality; the second is a good trade-off between them. For the first test set hints to speed.

Run the test
Note
On the first run, the throughput should be lower than that of GeoServer, because GeoWebCache spends much of the time generating the cached tiles.
Remove the result from JMeter by clicking on Run –> Clear All on the menu
Run the same test again.
Now the throughput should be improved, because GeoWebCache has already generated the tiles to cache and can reuse them. Image quality should be very poor because of the hints=speed configuration.

Result from GeoWebCache fullWMS with hints=speed
Run the same test with hints=default:
Result from GeoWebCache fullWMS with hints=default
Run the same test with hints=quality:
Result from GeoWebCache fullWMS with hints=quality
It should be noted that changing the hints parameter changes the image quality, but the throughput is always greater than that of GeoServer WMS.
Configuring JMeter for testing WMS Resource Limits¶
The following section explains how GeoServer performance improves when setting resource limits for WMS.
Open your Web browser and navigate to the GeoServer Welcome Page.
Go to Stores and select the storms DataStore.

Change the following parameters:

Name                Value
max connections     1
Connection timeout  20000000

It should look something like this:

Change storms parameters

Now you have configured this store to enqueue all requests in a single queue until they time out.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a limit.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
Disable all the Thread Groups except for the 64 one in order to create a test environment with multiple concurrent requests to be enqueued.
In the CSV Data Set Config element, change the path of the CSV file to point to the limits.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  geosolutions:storm_obs
srs     EPSG:4326
Run the test
Note
Remember to run and stop the test a few times in order to obtain stable results.
You should see something like this:
View Results Tree
When the test is completed, Save the results in a text file and remove them from the console by clicking on Run –> Clear All on the menu
On your Web browser, navigate to the GeoServer Welcome Page.
Go to WMS and edit the Raster Rendering Options section:

Name                Value
Max rendering time  10

Changing WMS limit configuration
With this option, GeoServer will cut off any request that needs more than 10 seconds to be rendered, making GeoServer more responsive. Note that this will result in various errors returned by GeoServer for the operations that are cut off. You can choose another value; for a good result you should pick a value lower than the average response time of the first test.
Run the test again. You should see multiple errors like this:
Exceptions caused by maximum rendering limit exceeded
You may see that the throughput is increased because most of the timed-out requests have been removed. With this kind of configuration you can control the responsiveness of GeoServer by removing stale requests instead of waiting for them.
Note
At the end of the test remove the limits and restore the previous configuration of the storms DataStore.
Configuring JMeter for testing Control Flow plugin¶
This section explains how GeoServer performance improves when using the Control-Flow plugin.
This plugin prevents GeoServer from executing too many requests together, which could lead to poor performance, by reducing the number of concurrent operations executed and queuing the others. This behaviour improves GeoServer scalability.
Note
This example requires having already completed the Adding a ShapeFile and Adding a Style sections.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a controlflow.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
Disable View Results Tree section
In the CSV Data Set Config element, change the path of the CSV file to point to the controlflow.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  geosolutions:Mainrd
srs     EPSG:2876
Run the test
Note
Remember to run and stop the test a few times in order to obtain stable results.
When the test is completed, Save the results in a text file.
You should notice that the throughput initially increases and then starts to decrease. This is associated with poor scalability under the input load. Note the number of threads that yields the best throughput (it should be 8); this value indicates the maximum number of concurrent requests that the server can execute simultaneously.
Decreased throughput (Note the results may be different in other machines)
Remove the result from JMeter by clicking on Run –> Clear All on the menu
Stop GeoServer
Go to $TRAINING_ROOT/data/plugins/not_installed and copy the geoserver-2.6-SNAPSHOT-control-flow-plugin.zip file into $TRAINING_ROOT/tomcat-6.0.36/instances/instance1/webapps/geoserver/WEB-INF/lib.

Unzip the contents of geoserver-2.6-SNAPSHOT-control-flow-plugin.zip inside the same folder.

Go to $TRAINING_ROOT/geoserver_data, create a new file called controlflow.properties and add the following snippet:

# don't allow more than 8 WMS GetMap in parallel
ows.wms.getmap=8
This snippet indicates that no more than 8 GetMap requests can be executed simultaneously by the WMS service. More information about the configuration can be found in the next section.
Note
If during your test you found a different number for the maximum throughput, you should set that value instead of 8.
Restart GeoServer
Run the test again.
You may see that the throughput no longer decreases after the control-flow configuration, because the input requests are scheduled by the control-flow plugin, improving GeoServer scalability.
Stable throughput (Note the results may be different in other machines)
Configuring JMeter for testing the Marlin renderer¶
This section explains how GeoServer performance improves when using the Marlin renderer.
The Oracle JDK and OpenJDK come with two different anti-aliased renderers:
- Oracle JDK uses Ductus, a fast native renderer that has scalability issues (good for desktop use, less so on the server side)
- OpenJDK uses Pisces, a pure Java renderer that is not as fast as Ductus, but has good scalability (anecdotally, it becomes faster than Ductus above 4 concurrent requests)
The Marlin renderer is an improved version of Pisces that is as fast, if not faster, than Ductus, and scales up just as well as Pisces.
Go to $TRAINING_ROOT/data/jmeter_data and copy the template.jmx file, creating a marlin.jmx file.

From the training root, on the command line, run jmeter.bat (or jmeter.sh if you're on Linux) to start JMeter.

On the top left go to File –> Open and open the jmx file you just copied.
Disable View Results Tree section
In the CSV Data Set Config element, change the path of the CSV file to point to the controlflow.csv file in the $TRAINING_ROOT/data/jmeter_data directory.

In the HTTP Request Default element modify the following parameters:
Name    Value
layers  boulder
srs     EPSG:2876
Remove the controlflow.properties file created in the previous section, so that the control-flow limits do not affect this test.
Run the test
Note
Remember to run and stop the test a few times in order to obtain stable results.
When the test is completed, Save the results in a text file.
Throughput without Marlin (Note the results may be different in other machines)
Remove the result from JMeter by clicking on Run –> Clear All on the menu
Stop GeoServer
Download the Marlin rasterizer library at https://github.com/bourgesl/marlin-renderer/releases/download/v0.4.4/marlin-0.4.4.jar and save it in %TRAINING_ROOT%\data.

Open %TRAINING_ROOT%\setenv.bat and add the following lines to enable the Marlin renderer, right before the "Tomcat options for the JVM" section:

REM Marlin support
set JAVA_OPTS=%JAVA_OPTS% -Xbootclasspath/p:"%ROOT%\data\marlin-0.4.4.jar"
set JAVA_OPTS=%JAVA_OPTS% -Dsun.java2d.renderer=org.marlin.pisces.PiscesRenderingEngine
Start GeoServer again
Go to the map preview and open the boulder layer; you should see the following in the Tomcat console:

INFO: ===============================================================================
INFO: Marlin software rasterizer = ENABLED
INFO: Version = [marlin-0.4.4]
INFO: sun.java2d.renderer = org.marlin.pisces.PiscesRenderingEngine
INFO: sun.java2d.renderer.useThreadLocal = true
INFO: sun.java2d.renderer.useRef = soft
INFO: sun.java2d.renderer.pixelsize = 2048
INFO: sun.java2d.renderer.subPixel_log2_X = 3
INFO: sun.java2d.renderer.subPixel_log2_Y = 3
INFO: sun.java2d.renderer.tileSize_log2 = 5
INFO: sun.java2d.renderer.useFastMath = true
INFO: sun.java2d.renderer.useSimplifier = false
INFO: sun.java2d.renderer.doStats = false
INFO: sun.java2d.renderer.doMonitors = false
INFO: sun.java2d.renderer.doChecks = false
INFO: sun.java2d.renderer.useJul = false
INFO: sun.java2d.renderer.logCreateContext = false
INFO: sun.java2d.renderer.logUnsafeMalloc = false
INFO: ===============================================================================
Run the test again.
You should see that the throughput is significantly higher, especially at mid-high thread counts.
Throughput with Marlin (Note the results may be different in other machines)
Configuring the Control flow plugin¶
The control-flow module for GeoServer allows the administrator to control the amount of concurrent requests actually executing inside the server.
This kind of control is useful for a number of reasons:
- Performance: tests show that, with local data sources, the maximum throughput in GetMap requests is achieved when allowing at most 2 times the number of CPU cores of requests to run in parallel.
- Resource control: requests such as GetMap can use a significant amount of memory. The WMS request limits allow controlling the amount of memory used per request, but an OutOfMemoryError is still possible if too many requests run in parallel. By also controlling the number of requests executing, it's possible to keep the total amount of memory used below what was actually given to the Java Virtual Machine.
- Fairness: a single user should not be able to overwhelm the server with a lot of requests, leaving other users with tiny slices of the overall processing power.
The control flow method does not normally reject requests; it just queues up those in excess and executes them later. However, it's possible to configure the module to reject requests that have waited in the queue for too long.
Rule syntax reference¶
The current implementation of the control flow module reads its rules from a controlflow.properties property file located in the GeoServer data directory.
The global number of OWS requests executing in parallel can be specified with:
ows.global=<count>
Every request in excess will be queued and executed when other requests complete leaving some free execution slot.
A per request type control can be demanded using the following syntax:
ows.<service>[.<request>[.<outputFormat>]]=<count>
Where:
- <service> is the OWS service in question (at the time of writing it can be wms, wfs or wcs)
- <request>, optional, is the request type. For example, for the wms service it can be GetMap, GetFeatureInfo, DescribeLayer, GetLegendGraphics or GetCapabilities
- <outputFormat>, optional, is the output format of the request. For example, for the wms GetMap request it could be image/png, image/gif and so on
A few examples:
# don't allow more than 16 WCS requests in parallel
ows.wcs=16
# don't allow more than 8 GetMap requests in parallel
ows.wms.getmap=8
# don't allow more than 2 WFS GetFeature requests with Excel output format
ows.wfs.getfeature.application/msexcel=2
A per-user limit prevents a single user from making too many requests in parallel:
user=<count>
Where <count> is the maximum number of requests a single user can execute in parallel. The user tracking mechanism is cookie based, so it will work fine for browsers but not as well for other kinds of clients. An IP based mechanism is not provided at the moment, but it would have its own fallacies as well, as it would limit all the users sitting behind a single router to <count> requests (imagine the effect on a big public administration).
A request timeout is specified with the following syntax:
timeout=<seconds>
where <seconds>
is the number of seconds a request can stay queued waiting for execution. If the request does not enter execution before the timeout expires it will be rejected.
A complete example¶
Assuming the server we want to protect has 4 cores, a sample configuration could be:
# if a request waits in queue for more than 60 seconds it's not worth executing,
# the client will likely have given up by then
timeout=60
# don't allow the execution of more than 100 requests total in parallel
ows.global=100
# don't allow more than 10 GetMap in parallel
ows.wms.getmap=10
# don't allow more than 4 outputs with Excel output as it's memory bound
ows.wfs.getfeature.application/msexcel=4
# don't allow a single user to perform more than 6 requests in parallel
# (6 being the Firefox default concurrency level at the time of writing)
user=6
Running GeoNode under SSL¶
Enabling SSL will encrypt traffic between your GeoNode server and client browsers. This approach involves re-configuring Apache to serve on port 443, instead of port 80. Other approaches exist and should be added to this document.
Generate SSL Key & Certificate¶
The first step is to generate an SSL key and a self-signed certificate:
# for CommonName use GeoNode domain name or ip address as specified in GeoNode's SITEURL
openssl genrsa -des3 -out server.key 1024
openssl req -new -key server.key -out server.csr
# generate new server.key without challenge password, or Apache will ask for password at startup
mv server.key server.key.tmp
openssl rsa -in server.key.tmp -out server.key
# generate certificate
openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Copy the key and certificate to the standard locations:
sudo cp server.crt /etc/ssl/certs/geonode.crt
sudo cp server.key /etc/ssl/private/geonode.key
Next add the certificate to the cacerts file for python and java:
sudo sh -c "cat server.crt >> /var/lib/geonode/lib/python2.6/site-packages/httplib2/cacerts.txt"
sudo keytool -import -alias geonodessl -keystore /etc/ssl/certs/java/cacerts -file server.crt
Note: keytool will ask for a password; the standard password for the Java cacerts file is changeit.
Apache Configuration¶
Enable the ssl module in Apache with the command:
sudo a2enmod ssl
Next as root edit the Apache geonode config file /etc/apache2/sites-available/geonode
. At the beginning of the file replace:
<VirtualHost *:80>
with:
<IfModule mod_ssl.c>
<VirtualHost _default_:443>
At the bottom of the file, replace:
</VirtualHost>
with:
SSLEngine on
SSLCertificateFile /etc/ssl/certs/geonode.crt
SSLCertificateKeyFile /etc/ssl/private/geonode.key
BrowserMatch "MSIE [2-6]" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0
# MSIE 7 and newer should be able to use keepalive
BrowserMatch "MSIE [17-9]" ssl-unclean-shutdown
</VirtualHost>
</IfModule>
<VirtualHost *:80>
Redirect permanent / https://192.168.10.10/
</VirtualHost>
This tells Apache where to find the key and certificate. There are also some additional lines to handle MSIE, taken from Apache’s default-ssl file.
Tomcat Configuration¶
As root edit the Tomcat server config file /var/lib/tomcat6/conf/server.xml
, and replace:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
URIEncoding="UTF-8"
redirectPort="8443"
/>
with:
<Connector port="8080" protocol="HTTP/1.1"
connectionTimeout="20000"
URIEncoding="UTF-8"
scheme="https"
proxyName="<yourServersIPorDomainName>"
proxyPort="443"
/>
This tells Tomcat that it is running behind an https proxy. If this is omitted Tomcat will try to redirect to http.
GeoNode Configuration¶
As root edit the geonode config file /etc/geonode/local_settings.py
and change the SITEURL
protocol to https:
SITEURL = 'https://<ipaddressOrDomainName>/'
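A quick way to sanity-check the new value is a small sketch using only the Python standard library; the expectations (https scheme, trailing slash) are taken from the examples in this section:

```python
from urllib.parse import urlparse

def siteurl_ok(siteurl):
    """Check that SITEURL uses https and keeps its trailing slash,
    as the configuration examples in this section expect."""
    parts = urlparse(siteurl)
    return parts.scheme == "https" and siteurl.endswith("/")
```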
GeoServer Configuration¶
As root edit the file /var/lib/tomcat6/webapps/geoserver/WEB-INF/web.xml
and ensure the GEONODE_BASE_URL
is specified as follows:
<context-param>
<param-name>GEONODE_BASE_URL</param-name>
<param-value>https://localhost/</param-value>
</context-param>
Also update proxyBaseUrl
in the Geoserver global settings file /var/lib/geoserver/geonode-data/global.xml
:
<proxyBaseUrl>https://192.168.10.10/geoserver/</proxyBaseUrl>
Restart¶
Finally restart Apache and Tomcat with:
sudo /etc/init.d/apache2 restart
sudo /etc/init.d/tomcat6 restart
This information was compiled from a number of sources. The main links are listed below. Please contact the GeoNode list with any updates or corrections.
- http://confluence.atlassian.com/display/JIRA/Connecting+to+SSL+services
- http://confluence.atlassian.com/display/JIRA/Integrating+JIRA+with+Apache+using+SSL
- http://www.akadia.com/services/ssh_test_certificate.html
- https://help.ubuntu.com/10.04/serverguide/C/httpd.html
- https://help.ubuntu.com/10.04/serverguide/C/certificates-and-security.html
GeoSites: GeoNode Multi-Tenancy¶
GeoSites is a way to run multiple websites with a single instance of GeoNode. Each GeoSite can have different templates, apps, and data permissions but share a single database (useful for sharing users and data layers), GeoServer, and CSW. This is useful when multiple websites are desired to support different sets of users, but with a similar set of data and overall look and feel of the sites. Users can be given permission to access multiple sites if needed, which also allows administrative groups to be set up to support all sites with one account.
Master Website¶
A GeoSites installation uses a ‘master’ GeoNode website that has some additional administrative pages for doing data management. Layers, Maps, Documents, Users, and Groups can all be added and removed from different sites. Users can be given access to any number of sites, and data may appear on only a single site, or all of them. Additionally, if desired, any or all of the django apps installed on the other sites can be added to the master site to provide a single administrative interface that gives full access to all apps. The master site need not be accessible from the outside, so it can be used as an internal tool for the organization.
Users created on a particular site are created with access to just that site. Data uploaded to a particular site is given permission on that site as well as the master site. Any further adjustments to site-based permissions must be done from the master site.
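That permission rule can be stated compactly. A hedged sketch (assuming, as noted in a later section, that the master site has SITE_ID 1):

```python
MASTER_SITE_ID = 1  # assumption: the master site uses SITE_ID 1

def sites_for_upload(upload_site_id):
    """Uploaded data becomes visible on the uploading site and on the
    master site; any further site assignments are managed from the master."""
    return sorted({upload_site_id, MASTER_SITE_ID})
```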
Database¶
The master site, and all of the individual GeoSites, share a single database. Objects, including users, groups, and data layers, all appear within the database but an additional sites table indicates which objects have access to which sites. The geospatial data served by GeoServer (e.g., from PostGIS) can exist in the database like normal, since GeoServer will authenticate against GeoNode, which will use its database to determine permissions based on the object, current user, and site.
GeoServer¶
A single GeoServer instance is used to serve data to all of the GeoSites. To keep data organized each site specifies a default workspace (DEFAULT_WORKSPACE) that GeoServer will use to partition the data depending on which site uploaded it. The workspaces themselves don’t have any impact on permissions, since data can be added and removed from different sites; however, they provide at least some organization of the data based on the initial site.
Data that is common to all sites can be added to the master site which will appear in the generic ‘geonode’ workspace.
Settings Files and Templates¶
A key component in managing multiple sites is keeping data organized and using a structured series of settings files so that common settings can be shared and only site-specific settings are separated out. It is also best to import the default GeoNode settings from the GeoNode installation. This prevents the settings from having to be manually upgraded if there is any change to the default GeoNode settings.
Settings which are common to all GeoSites, but differ from the default GeoNode, are separated into a master_settings.py file. Then, each individual site has a settings file which imports from the master settings and then only needs to specify the small set of values that make that site unique, such as:
- SITE_ID: Each one is unique, the master site should have a SITE_ID of 1.
- SITENAME
- SITEURL
- ROOT_URLCONF: This may be optional. The master site url.conf can be configured to automatically import the urls.py of all SITE_APPS, so a different ROOT_URLCONF is only needed if there are further differences.
- SITE_APPS: Containing the site specific apps
- App settings: Any further settings required for the above sites
- Other site specific settings, such as REGISTRATION_OPEN
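Putting the list above together, a site-specific settings file might look roughly like this; module and app names are illustrative, not taken from the source:

```python
# siteA/settings.py -- illustrative sketch of a per-site settings module
from master_settings import *   # shared GeoSites settings (assumed module name)

SITE_ID = 2                     # unique per site; the master site uses 1
SITENAME = 'SiteA'
SITEURL = 'https://sitea.example.org/'
SITE_APPS = ('sitea_app',)      # site-specific apps (hypothetical)
REGISTRATION_OPEN = False       # example of another site-specific setting
```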
A GeoSite therefore has three layers of imports, which is used for settings as well as the search path for templates. First it uses the individual site files, then the master GeoSite, then default GeoNode. These are specified via variables defined in settings:
- SITE_ROOT: The directory where the site specific settings and files are located (templates, static)
- PROJECT_ROOT: The top-level directory of all the GeoSites which should include the global settings file as well as template and static files
- GEONODE_ROOT: The GeoNode directory.
The TEMPLATE_DIRS, and STATICFILES_DIRS will then include all three directories as shown:
TEMPLATE_DIRS = (
os.path.join(SITE_ROOT, 'templates/'),
os.path.join(PROJECT_ROOT,'templates/'), # files common to all sites
os.path.join(GEONODE_ROOT, 'templates/')
)
STATICFILES_DIRS = (
os.path.join(SITE_ROOT, 'static/'),
os.path.join(PROJECT_ROOT, 'static/'),
os.path.join(GEONODE_ROOT, 'static/')
)
At the end of settings_global.py the following variables will be set based on site-specific settings:
STATIC_URL = os.path.join(SITEURL, 'static/')
GEONODE_CLIENT_LOCATION = os.path.join(STATIC_URL, 'geonode/')
GEOSERVER_BASE_URL = SITEURL + 'geoserver/'
if SITE_APPS:
INSTALLED_APPS += SITE_APPS
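As a worked example, with SITEURL set to the address used elsewhere in this guide, those expressions evaluate as follows (a sketch run on a POSIX system, where os.path.join uses '/'):

```python
import os

SITEURL = 'https://192.168.10.10/'

STATIC_URL = os.path.join(SITEURL, 'static/')
GEONODE_CLIENT_LOCATION = os.path.join(STATIC_URL, 'geonode/')
GEOSERVER_BASE_URL = SITEURL + 'geoserver/'

print(STATIC_URL)               # https://192.168.10.10/static/
print(GEONODE_CLIENT_LOCATION)  # https://192.168.10.10/static/geonode/
print(GEOSERVER_BASE_URL)       # https://192.168.10.10/geoserver/
```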
Templates and Static Files¶
As mentioned above for each website there will be three directories used for template and static files. The first template file found will be the one used so templates in the SITE_ROOT/templates directory will override those in PROJECT_ROOT/templates, which will override those in GEONODE_ROOT/templates.
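The first-match-wins rule can be illustrated with a few lines of Python. This is a simplified stand-in for Django's template loader, not GeoNode code:

```python
import os
import tempfile

def find_template(name, search_dirs):
    """Return the path of the first file named `name` found in search_dirs:
    site templates shadow project templates, which shadow GeoNode's."""
    for d in search_dirs:
        candidate = os.path.join(d, name)
        if os.path.isfile(candidate):
            return candidate
    return None

# Demo: a site-level copy of site_index.html overrides the GeoNode default.
site_dir, geonode_dir = tempfile.mkdtemp(), tempfile.mkdtemp()
with open(os.path.join(geonode_dir, 'site_index.html'), 'w') as fh:
    fh.write('geonode default')
assert find_template('site_index.html', [site_dir, geonode_dir]) == \
    os.path.join(geonode_dir, 'site_index.html')
with open(os.path.join(site_dir, 'site_index.html'), 'w') as fh:
    fh.write('site override')
assert find_template('site_index.html', [site_dir, geonode_dir]) == \
    os.path.join(site_dir, 'site_index.html')
```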
Static files work differently because (at least on a production server) they are collected and stored in a single location. Because of this care must be taken to avoid clobbering of files between sites, so each site directory should contain all static files in a subdirectory with the name of the site (e.g., static/siteA/logo.png )
The location of the proper static directory can then be found in the templates syntax such as:
{{ STATIC_URL }}{{ SITENAME|lower }}/logo.png
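That template expression is a simple string build; a tiny Python sketch of the same lookup:

```python
def site_static_path(static_url, sitename, filename):
    """Python equivalent of the template expression
    {{ STATIC_URL }}{{ SITENAME|lower }}/logo.png."""
    return "{}{}/{}".format(static_url, sitename.lower(), filename)
```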
Permissions by Site¶
By default GeoNode is publicly available. In the case of GeoSites, new data will be publicly available, but only for the site it was added to, and the master site (all data is added to the master site).
Adding New Sites¶
A management command exists to easily create a new site. This will create all the needed directories, as well as a site specific settings file. The command may also create a website configuration file.
GeoNode Customization and Source Code Revision Control¶
This tutorial collects in a single place the steps for the customization of GeoNode (how to create a custom branch of the project and modify only the parts of interest) and how to save the work back to a source code revision control system, like GitHub, and manage the team development process.
GeoNode Customization Setup Steps¶
Warning
These instructions are only valid if you’ve installed GeoNode following the guide at Setup & Configure HTTPD !!
If you are working remotely, you should first connect to the machine that has your GeoNode installation. You will need to perform the following steps in a directory where you intend to keep your newly created project.
$ sudo su
$ cd /home/geonode
$ disable_local_repo.sh
$ apt-get install python-django
$ django-admin startproject geonode_custom --template=https://github.com/GeoNode/geonode-project/archive/master.zip -epy,rst
$ chown -Rf geonode: geonode_custom
$ exit
$ sudo pip install -e geonode_custom
Note
You should NOT use the name geonode for your project as it will conflict with your default geonode package name.
These commands create a new project based on the geonode-project template.
Make sure that the directories are reachable and have the correct rights for the users geonode and www-data:
$ sudo chown -Rf geonode: *
$ sudo chmod -Rf 775 geonode_custom
If you have a brand new installation of GeoNode, rename /home/geonode/geonode/local_settings.py.sample to local_settings.py and edit its content by setting the SITEURL and SITENAME. This file will be the main settings file for your project. It inherits all the settings from the original one, and you can override the ones that you need.
Note
You can also decide to copy the /home/geonode/geonode/local_settings.py.sample to /path/to/geonode_custom/geonode_custom/local_settings.py in order to keep all the custom settings confined into the new project.
Warning
In order for the edits to the local_settings.py file to take effect, you have to restart apache.
Edit the file /etc/apache2/sites-available/geonode.conf and change the following directive from:
WSGIScriptAlias / /home/geonode/geonode/wsgi/geonode.wsgi
to:
WSGIScriptAlias / /home/geonode/geonode_custom/geonode_custom/wsgi.py
$ sudo vi /etc/apache2/sites-available/geonode.conf

WSGIScriptAlias / /home/geonode/geonode_custom/geonode_custom/wsgi.py
...
<Directory "/home/geonode/geonode_custom/geonode_custom/">
...
Edit the file /etc/apache2/sites-available/geonode.conf and modify the DocumentRoot as follows:
Note
It’s a good practice to make copies and backups of the configuration files before modifying or updating them in order to revert the configuration at the previous state if something goes wrong.
<VirtualHost *:80>
    ServerName http://localhost
    ServerAdmin webmaster@localhost
    DocumentRoot /home/geonode/geonode_custom/geonode_custom

    ErrorLog /var/log/apache2/error.log
    LogLevel warn
    CustomLog /var/log/apache2/access.log combined

    WSGIProcessGroup geonode
    WSGIPassAuthorization On
    WSGIScriptAlias / /home/geonode/geonode_custom/geonode_custom/wsgi.py

    <Directory "/home/geonode/geonode_custom/geonode_custom/">
        <Files wsgi.py>
            Order deny,allow
            Allow from all
            Require all granted
        </Files>

        Order allow,deny
        Options Indexes FollowSymLinks
        Allow from all
        IndexOptions FancyIndexing
    </Directory>
    ...
Then regenerate the static JavaScript and CSS files from /path/to/geonode_custom/ and restart apache
$ cd /home/geonode/geonode_custom
$ python manage.py collectstatic
$ python manage.py syncdb
$ cd /home/geonode/geonode
$ sudo pip install -e .
$ sudo service apache2 restart
Source code revision control¶
It is recommended that you immediately put your new project under source code revision control. The GeoNode development team uses Git and GitHub and recommends that you do the same. If you do not already have a GitHub account, you can easily set one up. A full review of Git and distributed source code revision control systems is beyond the scope of this tutorial, but you may find the Git Book useful if you are not already familiar with these concepts.
Create a new repository in GitHub. You should use the GitHub user interface to create a new repository for your new project.
Creating a new GitHub Repository From GitHub’s Homepage
Specifying new GitHub Repository Parameters
Your new Empty GitHub Repository
Initialize your own repository in the geonode_custom folder:
$ sudo git init
Add the remote repository reference to your local git configuration:
$ sudo git remote add origin <https url of your custom repo>
# e.g.
$ sudo git remote add origin https://github.com/geosolutions-it/geonode_custom.git
Add your project files to the repository:
$ sudo git add .
Commit your changes:
# These two commands must be issued ONLY once
$ sudo git config --global user.email "geo@geo-solutions.it"
$ sudo git config --global user.name "GeoNode Training"

$ sudo git commit -am "Initial commit"
Push to the remote repository:
$ sudo git push origin master
A Typical GitHub Project Structure¶
Warning
This section is freely adapted from the official GitHub guides.
A great way to get involved in open source is to contribute to the existing projects you’re using.
The Community¶
Projects often have a community around them, made up of other users in different (formal or informal) roles:
- Owner is the user or organization that created the project and has the project on their account.
- Maintainers and Collaborators are the users primarily doing the work on a project and driving the direction. Oftentimes the owner and the maintainer are the same. They have write access to the repository.
- Contributors are all the users who have had a pull request merged into the project.
- Community Members are the users who often use and care deeply about the project and are active in discussions for features and pull requests.
Readme¶
Nearly all GitHub projects include a README.md file. The readme provides a lay of the land for a project with details on how to use, build and sometimes contribute to a project.
License¶
A LICENSE file, well, is the license for the project. An open source project’s license informs users what they can and can’t do (e.g., use, modify, redistribute), and contributors, what they are allowing others to do.
Documentation and Wikis¶
Many larger projects go beyond a readme to give instructions for how people can use their project. In such cases you’ll often find a link to another file or a folder named docs in the repository.

Alternatively, the repository may instead use the GitHub wiki to break down documentation.

Issues¶
Issues are a great way to keep track of tasks, enhancements, and bugs for your projects. They’re kind of like email—except they can be shared and discussed with the rest of your team. Most software projects have a bug tracker of some kind. GitHub’s tracker is called Issues, and has its own section in every repository.

For more information on how Issues work, see the section “Work With GitHub Issues and Pull Requests”
Pull Requests¶
If you’re able to patch the bug or add the feature yourself, make a pull request with the code. Be sure you’ve read any documents on contributing, understand the license and have signed a CLA if required.
Once you’ve submitted a pull request the maintainer(s) can compare your branch to the existing one and decide whether or not to incorporate (pull in) your changes.
For more information on how Pull Requests work, see the section “Work With GitHub Issues and Pull Requests”
Work With GitHub Issues and Pull Requests¶
Warning
This section is freely adapted from the official GitHub guides.
Issues¶
An Issue is a note on a repository about something that needs attention. It could be a bug, a feature request, a question or lots of other things. On GitHub you can label, search and assign Issues, making managing an active project easier.
For example, let’s take a look at Bootstrap’s Issues section:

GitHub’s issue tracking is special because of its focus on collaboration, references, and excellent text formatting. A typical issue on GitHub looks a bit like this:

- A title and description describe what the issue is all about.
- Color-coded labels help you categorize and filter your issues (just like labels in email).
- A milestone acts like a container for issues. This is useful for associating issues with specific features or project phases (e.g. Weekly Sprint 9/5-9/16 or Shipping 1.0).
- One assignee is responsible for working on the issue at any given time.
- Comments allow anyone with access to the repository to provide feedback.
Open an Issue¶
Click the Issues tab from the sidebar.
Click New Issue.
Give your Issue a title and description: Add a new Logo to GeoNode custom.
Click Submit new Issue when you’re done. Now this issue has a permanent home (URL) that you can reference even after it is closed.
Issues Pro Tips¶
- Check existing issues for your issue. Duplicating an issue is slower for both parties so search through open and closed issues to see if what you’re running into has been addressed already.
- Be clear about what your problem is: what was the expected outcome, what happened instead? Detail how someone else can recreate the problem.
- Link to demos recreating the problem on things like JSFiddle or CodePen.
- Include system details, like the browser, library, or operating system you’re using, and its version.
- Paste error output or logs in your issue or in a Gist. If pasting them in the issue, wrap them in three backticks (```) so that they render nicely.
Branching¶
Branching is the way to work on different parts of a repository at one time.
When you create a repository, by default it has one branch with the name master
. You could keep working on this branch and have only one, that’s fine. But if you have another feature or idea you want to work on, you can create another branch, starting from master
, so that you can leave master
in its working state.
When you create a branch, you’re making a copy of the original branch as it was at that point in time (like a photo snapshot). If the original branch changes while you’re working on your new branch, no worries, you can always pull in those updates.

GeoNode developers use branches to keep bug fixes and feature work separate from the master
(production) branch. When a feature or fix is ready, the branch is merged into master through a Pull Request.
To create a new branch¶
Go to the project folder and create a new branch
$ cd /home/geonode/geonode_custom/
$ sudo git branch add_logo
$ sudo git checkout add_logo
Check that you are working on the correct branch, add_logo:

$ cd /home/geonode/geonode_custom/
$ git branch
Push the new branch to GitHub.
$ cd /home/geonode/geonode_custom/
$ sudo git push origin add_logo
Make a commit¶
On GitHub, saved changes are called commits.
Each commit has an associated commit message, which is a description explaining why a particular change was made. Thanks to these messages, you and others can read through commits and understand what you’ve done and why.
Add a new logo to your custom GeoNode as described in the section Theming your GeoNode project
Stage the new files into the working project using git add:

$ cd /home/geonode/geonode_custom/
$ sudo git add geonode_custom/static
$ git status
Commit the changes, providing a commit message, and push them to your branch add_logo:

$ cd /home/geonode/geonode_custom/
$ sudo git commit -m "Adding a new logo to the custom GeoNode"
$ sudo git push origin add_logo
Pull Requests¶
Pull Requests are the heart of collaboration on GitHub. When you make a pull request, you’re proposing your changes and requesting that someone pull in your contribution - aka merge them into their branch. GitHub’s Pull Request feature allows you to compare the content on two branches. The changes, additions and subtractions, are shown in green and red and called diffs (differences).
As soon as you make a change, you can open a Pull Request. People use Pull Requests to start a discussion about commits (code review) even before the code is finished. This way you can get feedback as you go or help when you’re stuck.
By using GitHub’s @mention system in your Pull Request message, you can ask for feedback from specific people or teams.
Create a Pull Request for changes to the Logo¶
Click the Pull Request icon on the sidebar, then from the Pull Request page, click the green New pull request button.
Select the branch you made, add_logo, to compare with master (the original).

Look over your changes in the diffs on the Compare page and make sure they’re what you want to submit.
When you’re satisfied that these are the changes you want to submit, click the big green Create Pull Request button.
Give your pull request a title and since it relates directly to an open issue, include “fixes #” and the issue number in the title. Write a brief description of your changes.
When you’re done with your message, click Create pull request!
Merge your Pull Request¶
It’s time to bring your changes together – merge your add_logo
branch into the master
(the original) branch.
Click the green button to merge the changes into master. Click Confirm merge. Go ahead and delete the branch, since its changes have been incorporated, with the Delete branch button in the purple box.
If you revisit the issue you opened, it’s now closed! Because you included “fixes #1” in your Pull Request title, GitHub took care of closing that issue when the Pull Request was merged!
Customize the Look & Feel¶
Now you can edit the templates in geonode_custom/templates, as well as the CSS and images, to match your needs, and save the changes back to source revision control.
Theming your GeoNode project¶
There are a range of options available to you if you want to change the default look and feel of your GeoNode project. Since GeoNode’s style is based on Bootstrap you will be able to make use of all that Bootstrap has to offer in terms of theme customization. You should consult Bootstrap’s documentation as your primary guide once you are familiar with how GeoNode implements Bootstrap and how you can override GeoNode’s theme and templates in your own project.
Logos and graphics¶
GeoNode intentionally does not include a large number of graphics files in its interface. This keeps page loading time to a minimum and makes for a more responsive interface. That said, you are free to customize your GeoNode’s interface by simply changing the default logo, or by adding your own images and graphics to deliver a GeoNode experience the way you envision it.
Your GeoNode project has a directory already set up for storing your own images at <geonode_custom>/static/img
. You should place any image files that you intend to use for your project in this directory.
Let’s walk through an example of the steps necessary to change the default logo.
Change to the img directory:

$ cd /home/geonode/geonode_custom/geonode_custom/static/img
If you haven’t already, obtain your logo image. The URL below is just an example, so you will need to change this URL to match the location of your file or copy it to this location:
$ sudo wget http://www2.sta.uwi.edu/~anikov/UWI-logo.JPG
$ sudo chown -Rf geonode: .
Change to the css directory:

$ cd /home/geonode/geonode_custom/geonode_custom/static/css
Override the CSS that displays the logo by editing <geonode_custom>/static/css/site_base.css with your favorite editor and adding the following lines, making sure to update the width, height, and URL to match the specifications of your image.

$ sudo vi site_base.css
.navbar-brand {
    width: 373px;
    height: 79px;
    background: transparent url("img/UWI-logo.JPG") no-repeat scroll 15px 0px;
}
Restart your GeoNode project and look at the page in your browser:

$ cd /home/geonode
$ sudo rm -Rf geonode/geonode/static_root/*
$ cd geonode_custom
$ python manage.py collectstatic
$ sudo service apache2 restart
Note
It is good practice to clean up the static_root folder and the browser cache before reloading, to be sure the changes have been correctly picked up and are displayed on the screen.
Visit your site at http://localhost/ or the remote URL for your site.

Custom logo
You can see that the header has been expanded to fit your graphic. In the following sections you will learn how to customize this header to make it look and function the way you want.
Note
You should commit these changes to your repository as you progress through this section, and get in the habit of committing early and often so that you and others can track your project on GitHub. Making many atomic commits and staying in sync with a remote repository makes it easier to collaborate with others on your project.
Cascading Style Sheets¶
In the last section you already learned how to override GeoNode’s default CSS rules to include your own logo. You are able to customize any aspect of GeoNode’s appearance this way. In the last screenshot, you saw that the main area in the homepage is covered up by the expanded header.
First, we’ll walk through the steps necessary to displace it downward so it is no longer hidden, then change the background color of the header to match the color in our logo graphic.
Reopen <geonode_custom>/static/css/site_base.css in your editor and add the following rule after the one added in the previous step:

$ cd /home/geonode/geonode_custom/geonode_custom/static/css
$ sudo vi site_base.css
#wrap { margin: 75px 75px; }
Add a rule to change the background color of the header to match the logo graphic we used:
.navbar-inverse { background: #0e60c3; }
Your project CSS file should now look like this:
.navbar-brand {
    width: 373px;
    height: 79px;
    background: url(img/UWI-logo.JPG) no-repeat;
}

#wrap {
    margin: 75px 75px;
}

.navbar-inverse {
    background: #0e60c3;
}
Restart the development server and reload the page:
$ python manage.py collectstatic
$ sudo service apache2 restart
CSS overrides
Note
You can continue adding rules to this file to override the styles that are in the GeoNode base CSS file which is built from base.less. You may find it helpful to use your browser’s development tools to inspect elements of your site that you want to override to determine which rules are already applied. See the screenshot below. Another section of this workshop covers this topic in much more detail.

Screenshot of using Chrome’s debugger to inspect the CSS overrides
Templates and static pages¶
Now that we have changed the default logo and adjusted our main content area to fit the expanded header, the next step is to update the content of the homepage itself. Your GeoNode project includes two basic templates that you will use to change the content of your pages.
The file site_base.html
(in <geonode_custom>/templates/
) is the basic template that all other templates inherit from, and you will use it to update things like the header, navbar, site-wide announcement, and footer, and also to include your own JavaScript or other static content included in every page in your site. It’s worth taking a look at GeoNode’s base file on GitHub. You have several blocks available to you for overriding, but since we will be revisiting this file in future sections of this workshop, let’s just look at it for now and leave it unmodified.
Open <geonode_custom>/templates/site_base.html
in your editor:
$ cd /home/geonode/geonode_custom/geonode_custom/templates
$ sudo vi site_base.html

{% extends "base.html" %}

{% block extra_head %}
    <link href="{{ STATIC_URL }}css/site_base.css" rel="stylesheet"/>
{% endblock %}
You will see that it extends from base.html
, which is the GeoNode template referenced above and it currently only overrides the extra_head
block to include our project’s site_base.css
which we modified in the previous section. You can see on line 22 of the GeoNode base.html template that this block is included in an empty state and is set up specifically for you to include extra CSS files as your project is already set up to do.
Now that we have looked at site_base.html
, let’s actually override a different template.
The file site_index.html
is the template used to define your GeoNode project’s homepage. It extends GeoNode’s default index.html
template and gives you the option to override specific areas of the homepage, like the hero area, but also allows you to leave areas like the “Latest Layers”, “Maps”, and “Contribute” sections as they are. You are of course free to override these sections if you choose; the steps below show you how.
Open <geonode_custom>/templates/site_index.html in your editor.

Edit the <h1> element on line 9 to say something other than “Welcome”:

<h1>{% trans "UWI GeoNode" %}</h1>
Edit the introductory paragraph to include something specific about your GeoNode project:
<p>
    {% blocktrans %}
    UWI's GeoNode is setup for students and faculty to collaboratively
    create and share maps for their class projects. It is maintained by
    the UWI Geographical Society.
    {% endblocktrans %}
</p>
Change the Getting Started link to point to another website:
<span>
    For more information about the UWI Geographical society,
    <a href="http://uwigsmona.weebly.com/">visit our website</a>
</span>
Add a graphic to the hero area above the paragraph replaced in step 3:
<img src='http://uwigsmona.weebly.com/uploads/1/3/2/4/13241997/1345164334.png'>
Your edited site_index.html file should now look like this:

{% extends 'index.html' %}
{% load i18n %}

{% comment %}
This is where you can override the hero area block. You can simply
modify the content below or replace it wholesale to meet your own needs.
{% endcomment %}

{% block hero %}
<div class="jumbotron">
  <div class="container">
    <h1>{% trans "UWI GeoNode" %}</h1>
    <div class="hero-unit-content"/>
    <div class="intro">
      <img src='http://uwigsmona.weebly.com/uploads/1/3/2/4/13241997/1345164334.png'>
    </div>
    <p>
      {% blocktrans %}
      UWI's GeoNode is setup for students and faculty to collaboratively
      create and share maps for their class projects. It is maintained by
      the UWI Geographical Society.
      {% endblocktrans %}
    </p>
    <span>
      For more information about the UWI Geographical society,
      <a href="http://uwigsmona.weebly.com/">visit our website</a>
    </span>
  </div>
</div>
{% endblock %}
Refresh your GeoNode project and view the changes in your browser at http://localhost/ or the remote URL for your site:
$ python manage.py collectstatic
$ sudo service apache2 restart
From here you can continue to customize your site_index.html
template to suit your needs. This workshop will also cover how you can add new pages to your GeoNode project site.
Other theming options¶
You are able to change any specific piece of your GeoNode project’s style by adding CSS rules to site_base.css
, but since GeoNode is based on Bootstrap, there are many pre-defined themes that you can simply drop into your project to get a whole new look. This is very similar to WordPress themes and is a powerful and easy way to change the look of your site without much effort.
Bootswatch¶
Bootswatch is a site where you can download ready-to-use themes for your GeoNode project site. The following steps will show you how to use a theme from Bootswatch in your own GeoNode site.
Visit http://bootswatch.com and select a theme (we will use Sandstone for this example). Select the download bootstrap.css option in the menu:
Put this file into <geonode_custom>/static/css:

$ cd /home/geonode/geonode_custom/geonode_custom/static/css
Update the site_base.html template to include this file. It should now look like this:

$ cd /home/geonode/geonode_custom/geonode_custom/templates
$ sudo vi site_base.html
{% extends "base.html" %}
{% block extra_head %}
  <link href="{{ STATIC_URL }}css/site_base.css" rel="stylesheet"/>
  <link href="{{ STATIC_URL }}css/bootstrap.css" rel="stylesheet"/>
{% endblock %}
Refresh the development server and visit your site:
$ python manage.py collectstatic
$ sudo service apache2 restart
Your GeoNode project site is now using the Sandstone theme in addition to the changes you have made.
Final Steps¶
When you have finished making changes, run the following commands from the geonode_custom folder:
$ cd /home/geonode/geonode_custom
$ python manage.py collectstatic
And now you should see all the changes you’ve made to your GeoNode.
Migrate Data Between GeoNode Instances¶
This workshop shows how to migrate GeoNode Layers (along with the associated GeoServer datasets and styles) from one instance to another.
The whole tutorial is divided into different parts, each one showing a different methodology for performing the data migration:
- Manual migration of data between two GeoNode instances with the same version
- Semi-automatic migration of data between GeoNode instances with the same version
- Automatic migration of data between GeoNode instances with the same version
- Automatic migration of data between GeoNode instances with different versions
Warning
Currently points 2, 3 and 4 are not yet feasible with the current GeoNode version. Those sections will be available as soon as the “GeoNode Backup & Restore” GNIP development is ready and merged into the master branch.
Manual migration of data between GeoNode instances with same version¶
This section shows how to export a Layer from one GeoNode instance and import it into another one with the same version.
Before going through the commands and operations needed to perform the export/import tasks, the tutorial will explain in detail the structure of a GeoNode Layer.
As you may already know, the physical geospatial data, along with its graphical stylesheets (also known as SLDs), is backed by GeoServer. Each GeoNode version is shipped with a related GeoServer version and GeoServer Data Dir (we will see later in detail what this means). For the moment, the important thing to know is that GeoNode cannot live without a running instance of GeoServer. Therefore, migrating data from one GeoNode instance to another also means moving geospatial data and stylesheets between the related GeoServer instances.
A GeoNode Layer Structure¶
Let's start by analysing the structure of a GeoNode Layer in depth.
Preparation Of The Webinar¶
This tutorial is generic and can be executed using any existing GeoNode Layer; however, in order to follow exactly the same steps, please complete this first simple exercise.
Warning
As a prerequisite, you must have access to a GeoNode instance with Admin rights.
Exercise
Add A Sample Layer To GeoNode
Log into GeoNode as Administrator

Click on the Add Layers button from the home page, in order to switch to the upload page

GeoNode Add Layers Button

Click on Browse on the upload page

GeoNode Upload Browse Button

Select from the folder gisdata/data/good/vector the 4 files:

- san_andres_y_providencia_coastline.dbf
- san_andres_y_providencia_coastline.prj
- san_andres_y_providencia_coastline.shp
- san_andres_y_providencia_coastline.shx

GeoNode Upload Browse Button

Click on the Upload Files button and make sure the operation completes successfully

GeoNode Upload Files Button
Layer Metadata¶
Each resource in GeoNode has metadata. Metadata in GeoNode is quite important: it is used by the application to describe, search and identify a resource. For instance, the title, abstract, regions and keywords of a resource are all part of its metadata.
Metadata in GeoNode is stored in the backend database as a set of fields associated with the resource. The catalogue service then makes use of this information to dynamically generate ISO compliant XML records. Those records can be used by external cataloguing applications, compliant with the standards supported by GeoNode, to automatically recognize and index the resources available on the server.
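Those ISO records can also be retrieved programmatically through GeoNode's CSW endpoint. The sketch below simply builds a CSW GetRecordById request URL with the standard library; the base URL (`/catalogue/csw`) and the record UUID are illustrative placeholders and may differ between installations and GeoNode versions.

```python
from urllib.parse import urlencode

def csw_record_url(base_url, record_id):
    """Build a CSW 2.0.2 GetRecordById URL asking for the ISO (gmd) schema."""
    params = {
        "service": "CSW",
        "version": "2.0.2",
        "request": "GetRecordById",
        "id": record_id,  # the layer's metadata UUID (placeholder below)
        "outputschema": "http://www.isotc211.org/2005/gmd",
        "elementsetname": "full",
    }
    return base_url + "?" + urlencode(params)

# Hypothetical host and UUID, for illustration only
url = csw_record_url("http://localhost/catalogue/csw",
                     "c9febc60-0000-0000-0000-000000000000")
print(url)
```

Fetching that URL (for example with urllib or curl) returns the same XML document you would get from the Download Metadata dialog.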
The GeoNode Layer info page makes use of some metadata fields to provide immediate description of the Layer.
As shown in the figure below, the GeoNode Info tab contains a table reporting basic information about the Layer, like the Title, the Abstract, the Category and others.
GeoNode Layer Info
Warning
The Title shown on the Layer Metadata is not the real Layer name. We will go deeper into this topic in the following sections of this tutorial.
In order to obtain the whole Layer Metadata in one standard format (usually not very human-friendly; a huge, long XML document), it is possible to click on the Download Metadata button on the right panel.
GeoNode will present the user with a modal window containing a list of permalinks to the supported dynamic XML formats.
GeoNode Layer Download Metadata
For instance, if you click on ISO, you will get an XML document containing all the Layer metadata fields in ISOTC211/19115 format.
GeoNode Layer Download Metadata ISOTC211/19115 Format
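The ISO document is namespaced XML; the layer title, for example, lives under gmd:identificationInfo. Below is a minimal sketch of extracting it with the Python standard library. The XML is a heavily trimmed, hypothetical fragment, not a complete ISO record.

```python
import xml.etree.ElementTree as ET

NS = {
    "gmd": "http://www.isotc211.org/2005/gmd",
    "gco": "http://www.isotc211.org/2005/gco",
}

# Trimmed, hypothetical fragment of an ISO 19115/19139 record
iso_xml = """
<gmd:MD_Metadata xmlns:gmd="http://www.isotc211.org/2005/gmd"
                 xmlns:gco="http://www.isotc211.org/2005/gco">
  <gmd:identificationInfo>
    <gmd:MD_DataIdentification>
      <gmd:citation>
        <gmd:CI_Citation>
          <gmd:title>
            <gco:CharacterString>San Andres Coastline</gco:CharacterString>
          </gmd:title>
        </gmd:CI_Citation>
      </gmd:citation>
    </gmd:MD_DataIdentification>
  </gmd:identificationInfo>
</gmd:MD_Metadata>
"""

root = ET.fromstring(iso_xml)
# Namespace-aware search for the title element
title = root.find(".//gmd:title/gco:CharacterString", NS).text
print(title)  # San Andres Coastline
```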
Exercise
Edit The Layer Metadata
Go to the GeoNode Layer List

GeoNode Layer List

Click on a Layer in order to go to the resource info page

GeoNode Layer Edit

Click on the Edit Layer button

GeoNode Layer Download Metadata Button

Click on the Edit button under the Metadata icon of the modal window

GeoNode Metadata Edit

Update at least the Title, the Abstract and the Category, and finally click on the Update button

GeoNode Metadata Edit
Exporting metadata is a fundamental task when moving a resource from one GeoNode instance to another.
Exercise
Export the Layer Metadata as ISOTC211/19115 and save it to an XML file on the local storage
Go to the GeoNode Layer List

GeoNode Layer List

Click on a Layer in order to go to the resource info page

GeoNode Layer Edit

Click on the Download Metadata button

GeoNode Layer Download Metadata Button

From the modal window, right-click on the ISO link

GeoNode Layer Download Metadata ISOTC211/19115 Format

From the context menu, select Save Link As

GeoNode Layer Download Metadata ISOTC211/19115 Save Link As

Save the XML file to disk and note its location for later use

GeoNode Layer Download Metadata ISOTC211/19115 XML
Layer Styles¶
Each Layer in GeoNode has an associated representation style, or Legend if you prefer.
A style is basically a set of rules instructing the geospatial server on how to create a portrayal of the original data. The figure shown on the map is only one of the infinite possible representations of the data stored on the server.
It is worth pointing out that viewing the data is substantially different from getting the data. A portrayal of the data gives users an immediate understanding of its meaning (or at least of one possible meaning), but it is not suitable for data analysis or more sophisticated computational tasks.
The legend (or style) depends exclusively on the geometry of the layer and, optionally, on a subset of its attributes.
In GeoNode, if you move to a Layer info page, you will notice a small Legend panel on the right representing the style currently in use.
GeoNode Layer Legend Panel
A Layer can of course have many different associated styles. Usually there is a Default Style, which is the one presented to users unless otherwise specified.
From the GeoNode interface it is possible to manage the styles associated with a Layer and also to change its Default Style.
Note
Only owners or users with write permissions on the Layer can update the styles.
Exercise
Layer Styles Management Panel
Go to the GeoNode Layer List

GeoNode Layer List

Click on a Layer in order to go to the resource info page

GeoNode Layer Edit

Click on the Edit Layer button on the right panel

GeoNode Layer Edit Button

Click on the Manage button under the Styles icon of the modal window

GeoNode Layer Styles Manage

Play with the styles comboboxes in order to change the Default Style, or add/remove styles, without updating the Layer

GeoNode Layer Styles Management Panel
Warning
Do not click on the Update Available Styles button, otherwise you will change the current Layer styles.
GeoNode also provides a simple tool for editing a Layer's style directly from the web interface.
Note
It is worth noting that the GeoNode style editor is very simple and does not allow advanced style editing. This tool may also not work perfectly with complex layers. Later in this tutorial we will see how it is possible to edit the style directly using the native SLD format.
Exercise
Update the default style through the GeoNode Style Editor tool
Go to the GeoNode Layer List

GeoNode Layer List

Click on a Layer in order to go to the resource info page

GeoNode Layer Edit

Click on the Edit Layer button on the right panel

GeoNode Layer Edit Button

Click on the Edit button under the Styles icon of the modal window

GeoNode Layer Styles Edit

You should see a small window similar to the one depicted below

GeoNode Layer Styles Editor

Select the first Rule and click on the small Edit button below

GeoNode Layer Styles Edit Rules

Modify the Name, the Color and the Width of the stroke, and click save

GeoNode Layer Styles Edit Stroke
The GeoNode style editor tool simply makes editing a Layer style easier, by providing a small graphical user interface on top of one of GeoServer's capabilities.
Under the hood, a Layer style is a special XML format defined by the Open Geospatial Consortium (OGC) as Styled Layer Descriptor, or SLD.
Advanced users can directly modify the SLD or use more advanced tools to create very complex and beautiful Layer styles.
In order to do that, you will need to update the SLD source directly through the GeoServer interface.
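To give an idea of what such an edit looks like outside the web interface, here is a sketch that changes the stroke color and width of a rule using only the Python standard library. The SLD fragment is minimal and hypothetical (namespaces omitted for brevity); real SLDs produced by GeoServer are larger and fully namespaced.

```python
import xml.etree.ElementTree as ET

# Minimal, hypothetical SLD rule fragment (namespaces omitted for brevity)
sld = """
<Rule>
  <Name>External Border</Name>
  <LineSymbolizer>
    <Stroke>
      <CssParameter name="stroke">#000000</CssParameter>
      <CssParameter name="stroke-width">1</CssParameter>
    </Stroke>
  </LineSymbolizer>
</Rule>
"""

rule = ET.fromstring(sld)
for param in rule.iter("CssParameter"):
    if param.get("name") == "stroke":
        param.text = "#FF0000"   # new border color
    elif param.get("name") == "stroke-width":
        param.text = "3"         # thicker line

print(ET.tostring(rule, encoding="unicode"))
```

The same kind of change is what you perform by hand in the GeoServer SLD editor page described in the next exercise.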
Exercise
Update the default style through the GeoServer interface
Log into GeoNode as Administrator, then click on the user button on the top right.

GeoNode Admin

From the menu, click on the GeoServer entry.

GeoNode Admin GeoServer

You will be redirected to the GeoServer admin interface.

GeoServer Admin Gui

Select the Styles topic from the left menu.

GeoServer Admin Styles

Select the layer name from the list and click on it. You will be redirected to the SLD editor page.

GeoServer Admin Style Editor

Modify the Color and the Width of the External Border XML rule. Click on Preview legend to see the changes and, when you are happy, Submit the SLD.

GeoServer Admin Style Editor

Go back to GeoNode and reload the Layer in order to see the changes.

GeoNode Updated Layer Style
GeoServer Web Interface Basics¶
This section will introduce the basic concepts of the web administration interface (generally abbreviated to “web admin”).
Welcome Page
For most installations, GeoServer will start a web server on localhost at port 8080, accessible at the following URL:
http://localhost:8080/geoserver/web
Note
This URL depends on your installation of GeoServer. When using the WAR installation, for example, the URL will depend on your container setup. A GeoNode/GeoServer production environment usually maps GeoServer on the same port 80, like the following:
http://<geonode_host>/geoserver/web
When correctly configured, a welcome page will open in your browser.
Welcome Page
The welcome page contains links to various areas of the GeoServer configuration. The About GeoServer section in the Server menu provides external links to the GeoServer documentation, homepage, and bug tracker. The page also provides login access to the GeoServer console. This security measure prevents unauthorized users from making changes to your GeoServer configuration. The default username and password are admin and geoserver. These can be changed only by editing the security/users.properties file in the data_directory.
Login
Regardless of authorization access, the web admin menu links to the Demo and Layer Preview portions of the console. The Demos page contains links to various information pages, while the Layer Preview page provides spatial data in various output formats.
Warning
On a GeoNode/GeoServer environment, you need to log into GeoNode first. The authorization is automatically picked up by GeoServer through your browser cookies.
Once logged into GeoNode, going to http://<geonode_host>/geoserver/web will automatically log you into GeoServer with the same rights.
Note
On a GeoNode/GeoServer environment, GeoServer still allows the FORM and BASIC authentication mechanisms (as shown in this section). This is useful only if you need to access GeoServer from external applications or with another/specific GeoServer user. In this case you must know the username and password of the GeoServer user.
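When calling GeoServer from an external application with BASIC authentication, the credentials travel in an Authorization header. A sketch of building that header with the Python standard library, using the default admin/geoserver credentials mentioned above (change them in production):

```python
import base64

def basic_auth_header(user, password):
    """Build an HTTP Basic Authorization header from credentials."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Default GeoServer credentials, for illustration only
headers = basic_auth_header("admin", "geoserver")
print(headers["Authorization"])  # Basic YWRtaW46Z2Vvc2VydmVy
```

These headers can then be passed to any HTTP client when requesting, for example, a protected capabilities document.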
When logged on, additional options will be presented.
Additional options when logged in
The GeoServer Web Coverage Service (WCS), Web Feature Service (WFS), and Web Map Service (WMS) configuration specifications can be accessed from this welcome page as well.
List Pages
Some web admin pages show list views of configuration data type items available in the GeoServer instance. The page displays links to the items, and where applicable their parent items as well. To facilitate working with large sets of items, list views allow sorting and searching across all items in the data type.
In the example below, the GeoServer Layers page displays a list of layers along with links to their parent GeoServer Stores and GeoServer Workspaces.
Layers list page
Sorting
To sort a column alphabetically, click the column header.
Unsorted (left) and sorted (right) columns
Searching
Searching can be used to filter the number of items displayed. This is useful for working with data types that contain a large number of items.
To search data type items, enter the search string in the search box and click Enter. GeoServer will search the data type for items that match your query, and display a list view showing the search results.
Search results for the query “coast” on the Layers page
GeoServer Workspaces¶
This section describes how to view and configure workspaces. Analogous to a namespace, a workspace is a container which organizes other items. In GeoServer, a workspace is often used to group similar layers together. Layers may be referred to by their workspace name, colon, layer name (for example geonode:san_andres_y_providencia_coastline). Two different layers can have the same name as long as they belong to different workspaces (for example sf:states and topp:states).
Note
On a GeoNode/GeoServer environment, by default there is only one workspace defined called geonode. You can still define other workspaces, but GeoNode will work only with Layers under the geonode workspace.
Workspaces page
Edit a Workspace
To view or edit a workspace, click the workspace name. A workspace configuration page will be displayed.
Workspace named “geonode”
A workspace is defined by a name and a Namespace URI (Uniform Resource Identifier). The workspace name is limited to ten characters and may not contain spaces. A URI is similar to a URL, except URIs do not need to point to an actual location on the web; they only need to be unique identifiers. For a Workspace URI, we recommend using a URL associated with your project, perhaps with a different trailing identifier. For example, http://www.geonode.org/ is the URI for the “geonode” workspace, and http://www.openplans.org/topp is the URI for the “topp” workspace.
Add a Workspace
The buttons for adding and removing a workspace can be found at the top of the Workspaces view page.
Buttons to add and remove
To add a workspace, select the Add new workspace button. You will be prompted to enter the workspace name and URI.
New Workspace page with example
Remove a Workspace
To remove a workspace, select it by clicking the checkbox next to the workspace. Multiple workspaces can be selected, or all can be selected by clicking the checkbox in the header. Click the Remove selected workspace(s) button. You will be asked to confirm or cancel the removal. Clicking OK removes the selected workspace(s).
Workspace removal confirmation
Warning
Removing a workspace will also delete all the layers associated with it.
Workspaces On The GeoServer Data Dir
All the configuration files of a workspace are stored in the GeoServer Data Dir. We will see later, in another section of the tutorial, how to access this folder.
It is worth noting that there is also a physical dependency between workspaces, stores and layers.
A GeoServer layer is always defined by its workspace and store. In the GeoServer Data Dir, the layer definition is stored as a subdirectory of its store, and the store definition is stored as a subdirectory of its workspace.
<data_directory>/
    ...
    workspaces/
        |
        +- workspace dirs...
           |
           +- datastore dirs...
              |
              +- layer dirs...
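Because of this workspace/store/layer nesting, the layers known to a GeoServer instance can be enumerated by walking the data directory. The sketch below builds a toy directory tree in a temporary location and walks it; against a real installation you would point workspaces_dir at <data_directory>/workspaces instead (real data dirs also contain XML files at each level, which the isdir checks skip).

```python
import os
import shutil
import tempfile

# Build a toy <data_directory>/workspaces tree for demonstration
root = tempfile.mkdtemp()
layer_path = os.path.join(root, "workspaces", "geonode",
                          "datastore", "san_andres_y_providencia_coastline")
os.makedirs(layer_path)

def list_layers(workspaces_dir):
    """Yield (workspace, store, layer) triples from the directory layout."""
    for ws in sorted(os.listdir(workspaces_dir)):
        ws_dir = os.path.join(workspaces_dir, ws)
        if not os.path.isdir(ws_dir):
            continue  # skip namespace/workspace XML files
        for store in sorted(os.listdir(ws_dir)):
            store_dir = os.path.join(ws_dir, store)
            if not os.path.isdir(store_dir):
                continue
            for layer in sorted(os.listdir(store_dir)):
                if os.path.isdir(os.path.join(store_dir, layer)):
                    yield ws, store, layer

layers = list(list_layers(os.path.join(root, "workspaces")))
print(layers)
shutil.rmtree(root)  # clean up the toy tree
```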
GeoServer Stores¶
A store connects to a data source that contains raster or vector data. A data source can be a file or group of files, a table in a database, a single raster file, or a directory (for example, a Vector Product Format library). The store construct allows connection parameters to be defined once, rather than for each dataset in a source. As such, it is necessary to register a store before configuring datasets within it.
Stores View
Store types
While there are many potential formats for data sources, there are only four kinds of stores. For raster data, a store can be a file. For vector data, a store can be a file, database, or server.
Type Icon | Description
---|---
(icon) | raster data in a file
(icon) | vector data in a file
(icon) | vector data in a database
(icon) | vector server (web feature server)
Edit a Store
To view or edit a store, click the store name. A store configuration page will be displayed.
The exact contents of this page depend on the specific format of the store.
See the sections Working with Vector Data, Working with Raster Data, and Working with Databases for information about specific data formats.
The example shows the configuration for the nurc:ArcGridSample store.
Editing a raster data store
Basic Store Info
The basic information is common for all formats.
- Workspace - the store is assigned to the selected workspace
- Data Source Name - the store name as listed on the view page
- Description - (optional) a description that displays in the administration interface
- Enabled - enables or disables access to the store, along with all datasets defined for it
Connection Parameters
The connection parameters vary depending on data format.
Add a Store
The buttons for adding and removing a store can be found at the top of the Stores page.
Buttons to add and remove a Store
To add a store, select the Add new Store button. You will be prompted to choose a data source. GeoServer natively supports many formats (with more available via extensions). Click the appropriate data source to continue.
Choosing the data source for a new store
The next page configures the store. Since connection parameters differ across data sources, the exact contents of this page depend on the store’s specific format. See the sections Working with Vector Data, Working with Raster Data, and Working with Databases for information on specific data formats. The example below shows the ArcGrid raster configuration page.
Configuration page for an ArcGrid raster data source
Remove a Store
To remove a store, click the checkbox next to the store. Multiple stores can be selected, or all can be selected by clicking the checkbox in the header.
Stores selected for removal
Click the Remove selected Stores button. You will be asked to confirm the removal of the configuration for the store(s) and all resources defined under them. Clicking OK removes the selected store(s), and returns to the Stores page.
Confirm removal of stores
How GeoNode Automatically Configures Workspaces And Stores¶
GeoNode currently creates the Stores automatically on Layer upload.
Summarizing, GeoNode:
always uses the Workspace geonode for each Layer
automatically configures the Stores for the Layers:
- For raster Layers, GeoNode creates a Store of type GeoTIFF with the same name as the Layer
- For vector Layers, GeoNode creates a Store of type ESRI Shapefile with the same name as the Layer, but only if not connected to a database
- For vector Layers, when connected to a database, GeoNode always uses the same Store of type PostGIS and converts the vector data into database tables
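The rules above can be condensed into a few lines of illustrative logic. This is a sketch of the decision, not GeoNode's actual code; the function and the "datastore" store name mirror the examples used later in this section.

```python
def store_for_layer(layer_name, is_raster, uses_database):
    """Mimic GeoNode's automatic store selection (illustrative only).

    Returns a (store_type, store_name) pair.
    """
    if is_raster:
        # one GeoTIFF store per raster layer, named after the layer
        return ("GeoTIFF", layer_name)
    if uses_database:
        # all vector layers share the single PostGIS store
        return ("PostGIS", "datastore")
    # otherwise one Shapefile store per vector layer
    return ("Shapefile", layer_name)

print(store_for_layer("san_andres_y_providencia_coastline",
                      is_raster=False, uses_database=False))
```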
Exercise
GeoNode Workspace And Stores
Log into GeoNode as Administrator, then click on the user button on the top right.

GeoNode Admin

From the menu, click on the GeoServer entry.

GeoNode Admin GeoServer

You will be redirected to the GeoServer admin interface.

GeoServer Admin Gui

Select the Workspaces topic from the left menu.

GeoServer Workspace Menu

Select the geonode workspace from the list.

GeoServer Workspace List

Edit the geonode workspace.

GeoServer Workspace geonode

Select the Stores topic from the left menu.

GeoServer Store Menu

If GeoNode and GeoServer have been configured to use a Database backend, select the store datastore of type PostGIS.

GeoServer Store List

GeoServer Store datastore

Note

The Datastore is configured by using the same connection parameters specified in the GeoNode local_settings.py file.

If GeoNode and GeoServer have been configured to use Shapefiles, select the store san_andres_y_providencia_coastline of type Shapefile.

GeoServer Store List

GeoServer Store san_andres_y_providencia_coastline

Note

The Store is configured by pointing directly to the path of the san_andres_y_providencia_coastline.shp file.
GeoServer Layers¶
In GeoServer, the term “layer” refers to a raster or vector dataset that represents a collection of geographic features. Vector layers are analogous to “featureTypes” and raster layers are analogous to “coverages”. All layers have a source of data, known as a Store. The layer is associated with the Workspace in which the Store is defined.
In the Layers section of the web interface, you can view and edit existing layers, add (register) a new layer, or remove (unregister) a layer. The Layers View page displays the list of layers, and the Store and Workspace in which each layer is contained. The View page also displays the layer’s status and native SRS.
Layers View
Layer types
Layers can be divided into two types of data: raster and vector. These two formats differ in how they store spatial information. Vector types store information about feature types as mathematical paths: a point as a single x,y coordinate, lines as a series of x,y coordinates, and polygons as a series of x,y coordinates that start and end in the same place. Raster format data is a cell-based representation of features on the earth's surface. Each cell has a distinct value, and all cells with the same value represent a specific feature.
Field | Description
---|---
(icon) | raster (grid)
(icon) | vector (feature)
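The vector model described above can be illustrated with plain coordinate lists; for instance, a valid polygon ring starts and ends at the same point. A small sketch:

```python
point = (4.0, 52.0)                               # a single x,y pair
line = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]       # ordered x,y pairs
ring = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 0.0)]  # closed path

def is_closed(coords):
    """A polygon ring must start and end on the same coordinate."""
    return len(coords) >= 4 and coords[0] == coords[-1]

print(is_closed(ring))   # True
print(is_closed(line))   # False
```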
Add a Layer
At the upper left-hand corner of the layers view page there are two buttons for adding and removing layers. The green plus button allows you to add a new layer (referred to as a resource). The red minus button allows you to remove selected layers.
Buttons to Add and Remove a Layer
Clicking the Add a new resource button brings up a New Layer Chooser panel. The menu displays all currently enabled stores. From this menu, select the Store where the layer should be added.
List of all currently enabled stores
Upon selection of a Store, a list of the resources within the store is displayed. Resources which have already been published as layers are listed first, followed by other resources which are available to be published. In this example, giant_polygon, poi, poly_landmarks and tiger_roads are all existing layers within the NYC store.
List of published and available resources in a store
To add a layer for an available resource click Publish.
To add a new layer for a published resource click Publish Again.
(Note that when re-publishing the name of the new layer may have to be modified to avoid conflict with an existing layer.)
Either action displays an Edit Layer page to enter the definition of the new layer.
Remove a Layer
To remove a layer, select it by clicking the checkbox next to the layer. As shown below, multiple layers can be selected for batch removal. Note that selections for removal will not persist from one results page to the next.
Some layers selected for removal
All layers can be selected for removal by clicking the checkbox in the header.
All layers selected for removal
Once layer(s) are selected, the Remove selected resources link is activated. Once you’ve clicked the link, you will be asked to confirm or cancel the removal. Selecting OK removes the selected layer(s).
Edit Layer: Data
To view or edit a layer, click the layer name. A layer configuration page will be displayed. The Data tab, activated by default, allows you to define and change data parameters for a layer.
Edit Layer: Data tab
Basic Info
The beginning sections (Basic Resource Info, Keywords and Metadata link) are analogous to the Service Metadata section for WCS, WFS, and WMS.
These sections provide “data about the data,” specifically textual information that make the layer data easier to understand and work with.
The metadata information will appear in the capabilities documents which refer to the layer.
Name—Identifier used to reference the layer in WMS requests. (Note that for a new layer for an already-published resource, the name must be changed to avoid conflict.)
Title—Human-readable description to briefly identify the layer to clients (required)
Abstract—Describes the layer in detail
Keywords—List of short words associated with the layer to assist catalog searching
Metadata Links—Allows linking to external documents that describe the data layer. Currently only two standard format types are valid: TC211 and FGDC. TC211 refers to the metadata structure established by the ISO Technical Committee for Geographic Information/Geomatics (ISO/TC 211) while FGDC refers to those set out by the Federal Geographic Data Committee (FGDC) of the United States.
Adding a metadata link in FGDC format
Coordinate Reference Systems
A coordinate reference system (CRS) defines how georeferenced spatial data relates to real locations on the Earth’s surface. CRSes are part of a more general model called Spatial Reference Systems (SRS), which includes referencing by coordinates and geographic identifiers. GeoServer needs to know the Coordinate Reference System of your data. This information is used for computing the latitude/longitude bounding box and reprojecting the data during both WMS and WFS requests.
Coordinate reference system of a layer
- Native SRS—Specifies the coordinate system the layer is stored in. Clicking the projection link displays a description of the SRS.
- Declared SRS—Specifies the coordinate system GeoServer publishes to clients
- SRS Handling—Determines how GeoServer should handle projection when the two SRSes differ
Bounding Boxes
The bounding box determines the extent of the data within a layer.
- Native Bounding Box—The bounds of the data specified in the Native SRS. These bounds can be generated by clicking the Compute from data button.
- Lat/Lon Bounding Box—The bounds specified in geographic coordinates. These bounds can be calculated by clicking the Compute from native bounds button.
Bounding Boxes of a layer
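The Compute from data operation conceptually reduces to a min/max scan over the dataset's coordinates. A sketch, assuming the coordinates are already expressed in the native SRS (the sample coordinates are illustrative):

```python
def native_bbox(coords):
    """Return (minx, miny, maxx, maxy) for a sequence of (x, y) pairs."""
    xs = [x for x, _ in coords]
    ys = [y for _, y in coords]
    return (min(xs), min(ys), max(xs), max(ys))

# Illustrative lon/lat sample points near San Andres
coastline = [(-81.7, 12.5), (-81.4, 12.6), (-81.5, 13.4)]
print(native_bbox(coastline))  # (-81.7, 12.5, -81.4, 13.4)
```

Computing the Lat/Lon bounding box from native bounds additionally requires reprojecting the corners into geographic coordinates, which GeoServer does for you.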
Feature Type Details (Vector)
Vector layers have a list of Feature Type Details. These include the Property and Type of a data source. For example, the sf:archsites layer shown below includes a geometry (the_geom) of type “point”.
Feature Type Details
The Nillable option refers to whether the property requires a value or may be flagged as being null. Meanwhile, Min/Max Occurrences refers to how many values a field is allowed to have. Currently both Nillable and Min/Max Occurrences are set to true and 0/1, but may be extended with future work on complex features.
Edit Layer: Publishing
The Publishing tab configures HTTP and WMS/WFS/WCS settings.
Edit Layer: Publishing tab
- Enabled—A layer that is not enabled won’t be available to any kind of request; it will just show up in the configuration (and in REST config)
- Advertised—A layer is advertised by default. A non-advertised layer will be available in all data access requests (for example, WMS GetMap, WMS GetFeature) but won’t appear in any capabilities document or in the layer preview.
HTTP Settings
Cache parameters that apply to the HTTP response from client requests.
- Response Cache Headers—If selected, GeoServer will not request the same tile twice within the time specified in Cache Time. The default value for Cache Time is one hour, measured in seconds (3600).
WMS Settings
Sets the WMS specific publishing parameters.
WMS Settings
- Queryable—Controls whether the layer is queryable via WMS GetFeatureInfo requests.
- Default style—Style that will be used when the client does not specify a named style in GetMap requests.
- Additional styles—Other styles that can be associated with this layer. Some clients (and the GeoServer Layer Preview) will present those as styling alternatives for that layer to the user.
- Default rendering buffer—Default value of the buffer GetMap/GetFeatureInfo vendor parameter. See the WMS Vendor Parameters for more details.
- Default WMS path—Location of the layer in the WMS capabilities layer tree. Useful for building non-opaque layer groups.
WMS Attribution
Sets publishing information about data providers.
WMS Attribution
- Attribution Text—Human-readable text describing the data provider. This might be used as the text for a hyperlink to the data provider’s web site.
- Attribution Link—URL to the data provider’s website.
- Logo URL—URL to an image that serves as a logo for the data provider.
- Logo Content Type, Width, and Height—These fields provide information about the logo image that clients may use to assist with layout. GeoServer will auto-detect these values if you click the Auto-detect image size and type link at the bottom of the section. The text, link, and URL are each advertised in the WMS Capabilities document if they are provided. Some WMS clients will display this information to advise users which providers provide a particular dataset. If you omit some of the fields, those that are provided will be published and those that are not will be omitted from the Capabilities document.
WFS Settings
Per-Request Feature Limit—Sets the maximum number of features for a layer a WFS GetFeature operation should generate (regardless of the actual number of query hits)
Maximum number of decimals—Sets the maximum number of decimals in GML output.
Note
It is also possible to override the OtherSRS/OtherCRS list configured in the WFS service, including overriding it with an empty list if need be. The input area will accept a comma separated list of EPSG codes:

WFS otherSRS/otherCRS override
The list will be used only for the capabilities document generation, but will not be used to limit the actual target SRS usage in GetFeature requests.
How GeoNode Automatically Configures Layers¶
GeoNode automatically sets and updates information on GeoServer every time a Layer is created or its metadata updated.
Exercise
Verify Information On GeoServer Side
Log into GeoNode as Administrator, then click on the user button on the top right.

GeoNode Admin
From the menu, click on the GeoServer entry.

GeoNode Admin GeoServer
You will be redirected to the GeoServer admin interface.

GeoServer Admin Gui
Select the Layers entry from the left menu.

GeoServer Layers Menu
Select the Layer san_andres_y_providencia_coastline from the list.

GeoServer Layers List
Notice how the Basic Resource Info has been automatically configured by GeoNode.
- The Name reflects the original file name.
- The Title has been filled with the Metadata value.
- The Abstract has been filled with the Metadata value.
- The Keywords have been added from the Metadata values.
GeoServer Layers Basic Resource Info
Notice how the Metadata Links reflect the Metadata Download options available on GeoNode, along with the URLs.
GeoServer Layers Metadata Links
Notice how the Coordinate Reference System and the Bounding Boxes have been automatically filled by GeoNode according to the data source info.
GeoServer Layers Coordinate Reference System
In the Publishing section, the Default Style has been updated and configured by GeoNode.
GeoServer Layers Publishing Section
GeoServer Stylesheets (SLD)¶
Styles control how geospatial data is rendered. Styles for GeoServer are written in Styled Layer Descriptor (SLD), an XML-based markup language. Please see the section on Styling for more information on working with styles.
On the Styles page, you can add a new style, view or edit an existing style, or remove a style.

Styles page
Edit a Style
To view or edit a style, click the style name. A Style Editor page will be displayed. The page presents options for configuring a style's name, code, and other attributes. Style names are specified at the top in the name field. The style's workspace can be chosen using the workspace selector. Styles are edited using a plain text editor with some basic utilities.

Style editor
The style editor supports line numbering, automatic indentation, and real-time syntax highlighting. You can also increase or decrease the font size of the editor.
| Button | Description |
| --- | --- |
| ![]() | Undo |
| ![]() | Redo |
| ![]() | Go to line |
| ![]() | Auto-format the editor contents |
| ![]() | Change the font size of the editor |
To confirm that the SLD code is fully compliant with the SLD schema, click the Validate button. A message box will confirm whether the style contains validation errors.
Note
GeoServer will sometimes render styles that fail validation, but this is not recommended.

No validation errors

Validation error message
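Before pasting a style into the editor, a quick local sanity check can catch malformed XML early. The sketch below (my own helper, not part of GeoServer) only verifies well-formedness with Python's standard library; full SLD schema validation still requires the Validate button or a schema-aware validator.

```python
# Minimal local check: is the SLD document at least well-formed XML?
# xml.etree does NOT validate against the SLD schema; it only parses.
import xml.etree.ElementTree as ET

def is_well_formed(sld_text):
    """Return (True, None) if the text parses as XML, else (False, error)."""
    try:
        ET.fromstring(sld_text)
        return True, None
    except ET.ParseError as err:
        return False, str(err)

good = """<StyledLayerDescriptor version="1.0.0"
    xmlns="http://www.opengis.net/sld">
  <NamedLayer><Name>example</Name></NamedLayer>
</StyledLayerDescriptor>"""

bad = "<StyledLayerDescriptor><NamedLayer></StyledLayerDescriptor>"

print(is_well_formed(good)[0])  # True
print(is_well_formed(bad)[0])   # False
```

A style that passes this check can still fail the editor's schema validation, so treat it only as a first filter.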
To view the generated legend entry for the style, click the Preview Legend button.
Add a Style
The buttons for adding and removing a style can be found at the top of the Styles page.

Adding or removing a style
To add a new style, select the Add a new style button. You will be redirected to an editor page. Enter a name for the style. You can also select the style format; in a default GeoServer installation only SLD is supported, but extensions (such as CSS) add support for additional formats.

The editor page provides several options for submitting a new style. You can paste the style directly into the editor contents, or you can generate a new default style based on an internal template:

Generating a new default style.
You can copy the contents of an existing style into the editor:

Copying an existing Style from GeoServer
You can select and upload a local file that contains the SLD:

Uploading an SLD file from your local computer
Once a style is successfully submitted, you will be redirected to the main Styles page where the new style will be listed.
Remove a Style
To remove a style, select it by clicking the checkbox next to the style. Multiple styles can be selected, or all can be selected by clicking the checkbox in the header. Click the Remove selected style(s) link at the top of the page. You will be asked to confirm or cancel the removal. Clicking OK removes the selected style(s).

Confirmation prompt for removing styles
How GeoNode Automatically Configures Style¶
When uploading a new Layer, GeoNode creates by default an SLD on GeoServer with the same native name as the Layer and assigns it to the Layer as the Default Style.
In the previous sections you already modified the default style of the Layer san_andres_y_providencia_coastline
automatically generated by GeoNode.
GeoServer Data Dir Structure¶
This section gives an overview of the structure and contents of the GeoServer data directory.
This is not intended to be a complete reference to the GeoServer configuration information, since generally the data directory configuration files should not be accessed directly. Instead, the Web Administration Interface can be used to view and modify the configuration manually, and for programmatic access and manipulation the REST configuration API should be used.
The directories that do contain user-modifiable content are: logs, palettes, templates, user_projections, and www.
The following figure shows the structure of the GeoServer data directory:
<data_directory>/
    global.xml
    logging.xml
    wms.xml
    wfs.xml
    wcs.xml
    data/
    demo/
    geosearch/
    gwc/
    layergroups/
    logs/
    palettes/
    plugIns/
    security/
    styles/
    templates/
    user_projections/
    workspaces/
      |
      +- workspace dirs...
         |
         +- datastore dirs...
            |
            +- layer dirs...
    www/
The .xml files
The top-level .xml
files contain information about the services and various global options for the server instance.
| File | Description |
| --- | --- |
| global.xml | Contains settings common to all services, such as contact information, JAI settings, character sets and verbosity. |
| logging.xml | Specifies logging parameters, such as logging level, logfile location, and whether to log to stdout. |
| wcs.xml | Contains the service metadata and various settings for the WCS service. |
| wfs.xml | Contains the service metadata and various settings for the WFS service. |
| wms.xml | Contains the service metadata and various settings for the WMS service. |
workspaces
The workspaces directory contains metadata about the layers published by GeoServer.
It contains a directory for each defined workspace.
Each workspace directory contains directories for the datastores defined in it.
Each datastore directory contains directories for the layers defined for the datastore.
Each layer directory contains a layer.xml file, and either a coverage.xml or a featuretype.xml file, depending on whether the layer represents a raster or vector dataset.
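The workspace/datastore/layer nesting described above can be sketched in a few lines. The directory and layer names below are illustrative, and the walker is my own helper, not a GeoServer tool:

```python
# Build a toy workspaces/ tree (workspace -> datastore -> layer) in a
# temp directory, then locate every layer by looking for layer.xml.
import os
import tempfile

root = tempfile.mkdtemp()
layer_dir = os.path.join(root, "workspaces", "geonode",
                         "uploaded", "san_andres_y_providencia_coastline")
os.makedirs(layer_dir)
# A vector layer holds layer.xml plus featuretype.xml
# (a raster layer would hold coverage.xml instead).
for name in ("layer.xml", "featuretype.xml"):
    open(os.path.join(layer_dir, name), "w").close()

def find_layers(data_dir):
    """Yield (workspace, datastore, layer) for each layer.xml found."""
    ws_root = os.path.join(data_dir, "workspaces")
    for dirpath, _dirs, files in os.walk(ws_root):
        if "layer.xml" in files:
            rel = os.path.relpath(dirpath, ws_root)
            yield tuple(rel.split(os.sep))

print(list(find_layers(root)))
# [('geonode', 'uploaded', 'san_andres_y_providencia_coastline')]
```

In a real data directory you should not edit these files by hand; use the Web Administration Interface or the REST API instead, as noted above.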
data
The data
directory can be used to store file-based geospatial datasets being served as layers.
(This should not be confused with the main “GeoServer data directory”.)
This directory is commonly used to store shapefiles and raster files,
but can be used for any data that is file-based.
The main benefit of storing data files under the data
directory is portability.
Consider a shapefile stored external to the data directory at a location C:\gis_data\foo.shp
.
The datastore
entry in catalog.xml
for this shapefile would look like the following:
<datastore id="foo_shapefile">
  <connectionParams>
    <parameter name="url" value="file://C:/gis_data/foo.shp" />
  </connectionParams>
</datastore>
Now consider trying to port this data directory to another host running GeoServer.
The location C:\gis_data\foo.shp
probably does not exist on the second host.
So either the file must be copied to this location on the new host,
or catalog.xml
must be changed to reflect a new location.
This problem can be avoided by storing foo.shp
in the data
directory.
In this case the datastore
entry in catalog.xml
becomes:
<datastore id="foo_shapefile">
  <connectionParams>
    <parameter name="url" value="file:data/foo.shp"/>
  </connectionParams>
</datastore>
The value
attribute is rewritten to be relative to the data
directory.
This location independence allows the entire data directory to be copied to a new host
and used directly with no additional changes.
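The portability argument above can be made concrete with a small sketch. This resolver is purely illustrative (GeoServer's own resource loader handles this internally); it just shows that a relative file: URL follows the data directory wherever it is copied, while an absolute one does not:

```python
# Illustrative resolver for datastore "file:" URLs.
# "file://..." is treated as absolute; "file:data/..." is resolved
# against whatever the current data directory happens to be.
import os

def resolve(url, data_dir):
    """Resolve a datastore 'file:' URL against a data directory."""
    path = url[len("file:"):]
    if path.startswith("//"):        # absolute location, host-specific
        return path[2:]
    return os.path.join(data_dir, path)  # relative: portable

# The absolute form breaks when the directory moves to a new host:
print(resolve("file://C:/gis_data/foo.shp", "/anywhere"))
# C:/gis_data/foo.shp

# The relative form follows the data directory:
print(resolve("file:data/foo.shp", "/var/lib/geoserver_data"))
# /var/lib/geoserver_data/data/foo.shp
print(resolve("file:data/foo.shp", "/opt/new_data_dir"))
# /opt/new_data_dir/data/foo.shp
```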
demo
The demo
directory contains files which define the sample requests available in the Sample Request Tool (http://localhost/geoserver/demoRequest.do).
See the Demos page for more information.
geosearch
The geosearch
directory contains information for regionation of KML files.
gwc
The gwc
directory holds the cache created by the embedded GeoWebCache service.
layergroups
The layergroups
directory contains configuration information for the defined layergroups.
logs
The logs
directory contains configuration information for logging profiles,
and the default geoserver.log
log file.
See also Advanced log configuration.
palettes
The palettes
directory is used to store pre-computed Image Palettes.
Image palettes are used by the GeoServer WMS as way to reduce the size of produced images while maintaining image quality.
See also Paletted Images.
security
The security
directory contains the files used to configure the GeoServer security subsystem. This includes a set of property files which define access roles, along with the services and data each role is authorized to access. See the Security section for more information.
styles
The styles
directory contains Styled Layer Descriptor (SLD) files which contain styling information used by the GeoServer WMS. For each file in this directory there is a corresponding entry in catalog.xml
:
<style id="point_style" file="default_point.sld"/>
See the Styling section for more information about styling and SLD.
templates
The templates
directory contains files used by the GeoServer templating subsystem.
Templates are used to customize the output of various GeoServer operations.
See also Freemarker Templates.
user_projections
The user_projections
directory contains a file called epsg.properties
which is used to define custom spatial reference systems that are not part of the official EPSG database.
See also Custom CRS Definitions.
www
The www
directory is used to allow GeoServer to serve files like a regular web server.
The contents of this directory are served at http://<host:port>/geoserver/www.
While not a replacement for a full blown web server,
this can be useful for serving client-side mapping applications.
See also Serving Static Files.
Exercise
Navigate the GeoServer Data Directory
Log into GeoNode as Administrator, then click on the user button on the top right.

GeoNode Admin
From the menu, click on the GeoServer entry.

GeoNode Admin GeoServer
You will be redirected to the GeoServer admin interface.

GeoServer Admin Gui
Select the Server Status entry from the left menu. On the status page, note the Data Directory path.

GeoServer Admin Server Status Page
Open a Terminal window and go to the GeoServer Data Directory folder. Navigate the folders and examine the files.

Enter the styles directory and confirm that the file san_andres_y_providencia_coastline.sld is present.
$ cat san_andres_y_providencia_coastline.sld
Steps To Manually Migrate A Layer¶
In this section we are going to save to disk all the information from a GeoNode Layer needed to restore it later on another GeoNode instance.
Here is a quick summary of the actions needed to manually export a GeoNode Layer:
- First of all, it is necessary to store the Layer data in a portable format. Since we can't assume any specific configuration on the target GeoNode (we don't know whether it has been attached to a database, nor its connection parameters), we need to save the Layer source data in a format that can be imported back into any GeoNode. The solution is to export the Layer as an ESRI Shapefile.
- We also need to export the SLDs defined for the Layer, saving the Layer Styles to disk as well.
- Once we have all the source data, the final step is to save the Layer Metadata, so that all the information defined for that Layer can be replicated on the target instance.
The exercise consists of 4 steps:
- Download a Layer as ESRI Shapefile: We will download the Layer san_andres_y_providencia_coastline, already configured in GeoNode, as an ESRI Shapefile. This allows us to store the data in a portable format.
- Save and export the Layer SLDs: We will check which Styles are associated to the san_andres_y_providencia_coastline Layer, retrieve the corresponding SLD files from the GeoServer Data Dir and save them for a later import.
- Save and export the Layer Metadata: We will save the san_andres_y_providencia_coastline Layer metadata as an ISOTC211/19115 XML file for a later import.
- Import back the Layer: We will clean up the GeoNode instance, making sure that GeoServer is cleaned as well, and then restore the san_andres_y_providencia_coastline Layer using the files saved during the previous steps.
1. Download a Layer as ESRI Shapefile¶
Note
In order to execute this exercise, we assume that you have read all the previous sections and your source GeoNode has a Layer already configured as specified on the preliminary notes.
Exercise
Export A Layer As A SHAPEFILE
Warning
You must be GeoNode Administrator in order to successfully execute this exercise.
Identify the real Layer name.
Connect to your local GeoNode instance and explore the available Layers.
GeoNode Explore Layers
Select the Layer San Andreas Coastlines and click on Metadata Edit.
GeoNode Edit Metadata
You will find the real Layer name as the subtitle of the Edit Metadata page. Note it down somewhere.

GeoNode Layer Real Name
Identify the GeoServer Endpoint.
Click on admin and then on the GeoServer menu link.

GeoServer Endpoint
In our case the GeoServer Endpoint is http://localhost/geoserver.

GeoServer Web Page
Export the Layer as an ESRI ShapeFile file format.
Open a Terminal window and go to /home/geo/Desktop/.

$> cd /home/geo/Desktop/
Terminal: Desktop Folder
Create a new folder backup and enter it.

$> mkdir backup
$> cd backup
Terminal: Backup Folder
Finally export geonode:san_andres_y_providencia_coastline as a Shapefile from GeoServer using the wget utility.

$> wget --user=admin --password=admin -O san_andres_y_providencia_coastline.zip "http://localhost/geoserver/geonode/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=geonode:san_andres_y_providencia_coastline&outputFormat=SHAPE-ZIP"
Terminal: Save
san_andres_y_providencia_coastline.zip
Note
Let's examine the command we just executed:
# This is the executable
wget

# These are the GeoNode credentials for the ``admin`` user. Yours may be different.
--user=admin --password=admin

# This is the name of the output file we want to create. The name is not important; you can choose any.
-O san_andres_y_providencia_coastline.zip

# This is the full URL telling GeoServer to export the source data as a Shapefile ZIP.
"http://localhost/geoserver/geonode/ows?service=WFS&version=1.0.0&request=GetFeature&typeName=geonode:san_andres_y_providencia_coastline&outputFormat=SHAPE-ZIP"
# 1. It must be quoted ""
# 2. The first part is the GeoServer Endpoint "http://localhost/geoserver"
# 3. The "typeName" is the real GeoNode Layer name "typeName=geonode:san_andres_y_providencia_coastline"
# 4. With "outputFormat" we tell GeoServer the download format "outputFormat=SHAPE-ZIP"
Unzip the file into the backup folder.

$> unzip san_andres_y_providencia_coastline.zip
Terminal: Unzip
san_andres_y_providencia_coastline.zip
Check that the Shapefile has been correctly downloaded. You must see the 4 files (.dbf, .prj, .shp, .shx) in the backup folder.

$> ls -la
Terminal: Check
san_andres_y_providencia_coastline.zip
Note
You can also delete the san_andres_y_providencia_coastline.zip file now. It won't be needed anymore.
2. Save and export the Layer SLDs¶
Note
In order to execute this exercise, we assume that you have read all the previous sections and your source GeoNode has a Layer already configured as specified on the preliminary notes.
Exercise
Export The Layer Default SLD
Warning
By default GeoNode creates an SLD on GeoServer with the same name as the imported Layer and uses it as the default style. In this exercise we will assume that the Layer uses only the default GeoNode SLD.
Identify the full path of the
GeoServer Data Dir
Click on the GeoServer link of the GeoNode admin menu.

GeoServer Admin Page
Click on the Server Status link of the GeoServer admin page.

GeoServer Admin Page Server Status
Copy the full path of the GeoServer Data Dir from the Server Status page.

GeoServer Admin Page GeoServer Data Dir
From the Terminal window, copy san_andres_y_providencia_coastline.sld into the backup folder.

$> cd /home/geo/Desktop/backup/
$> cp /var/lib/tomcat7/webapps/geoserver/data/styles/san_andres_y_providencia_coastline.sld .
Terminal: Copy
san_andres_y_providencia_coastline.sld
Note
Let's examine the command we just executed:
# This is the executable
cp

# This is the full path of the ``san_andres_y_providencia_coastline.sld`` file
/var/lib/tomcat7/webapps/geoserver/data/styles/san_andres_y_providencia_coastline.sld
# 1. Notice that the first part ``/var/lib/tomcat7/webapps/geoserver/data`` is the path of the GeoServer Data Dir
# 2. You must append ``/styles/`` to the path of the GeoServer Data Dir
# 3. The SLD file has the same prefix as the original Layer: ``san_andres_y_providencia_coastline.sld``
Check that the SLD has been correctly copied. You must see the .sld file in the backup folder.

$> ls -la
Terminal: Check
san_andres_y_providencia_coastline.sld
3. Save and export the Layer Metadata¶
Note
In order to execute this exercise, we assume that you have read all the previous sections and your source GeoNode has a Layer already configured as specified on the preliminary notes.
Exercise
Export A Layer Metadata As An ISOTC211/19115 XML
Warning
You must be GeoNode Administrator in order to successfully execute this exercise.
Get the ISOTC211/19115 XML URL.
Connect to your local GeoNode instance and explore the available Layers.
GeoNode Explore Layers
Select the Layer San Andreas Coastlines and click on Download Metadata.
GeoNode Download Metadata
Right-click on the ISO link and then click Copy Link Location.

Copy Link Location Of ISOTC211/19115 XML
Store the ISOTC211/19115 XML through the wget command.

From the Terminal window, go to the backup folder we created before.

$> cd /home/geo/Desktop/backup/
Finally export san_andres_y_providencia_coastline.xml as ISOTC211/19115 XML from GeoNode using the wget utility.

$> wget --user=admin --password=admin -O san_andres_y_providencia_coastline.xml "http://localhost/catalogue/csw?outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&service=CSW&request=GetRecordById&version=2.0.2&elementsetname=full&id=3236a2e0-f023-11e5-86e3-08002779b53d"
Terminal: Save
san_andres_y_providencia_coastline.xml
Note
Let's examine the command we just executed:
# This is the executable
wget

# These are the GeoNode credentials for the ``admin`` user. Yours may be different.
--user=admin --password=admin

# This is the name of the output file we want to create. **It's important** that the file has the same prefix as the ``san_andres_y_providencia_coastline`` Layer name.
-O san_andres_y_providencia_coastline.xml

# This is the full URL of the ISOTC211/19115 XML.
"http://localhost/catalogue/csw?outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd&service=CSW&request=GetRecordById&version=2.0.2&elementsetname=full&id=3236a2e0-f023-11e5-86e3-08002779b53d"
# 1. It must be quoted ""
# 2. The first part is the GeoNode Catalogue Endpoint "http://localhost/catalogue"
# 3. With "outputschema" we tell GeoNode the metadata format "outputschema=http%3A%2F%2Fwww.isotc211.org%2F2005%2Fgmd"
# 4. Notice that every Layer in GeoNode is identified by a unique ID "id=3236a2e0-f023-11e5-86e3-08002779b53d", which can be retrieved from the Layer Metadata panel.
Check that the XML has been correctly downloaded. You must see the .xml file in the backup folder.

$> ls -la
Terminal: Check
san_andres_y_providencia_coastline.xml
4. Import back the Layer through the “importlayers” GeoNode Management Command¶
At this point we have everything we need to restore the Layer saved into the /home/geo/Desktop/backup/
folder.
What we are going to do now is:
- Cleanup GeoNode and GeoServer, making sure that san_andres_y_providencia_coastline has been completely removed from both instances.
- Restore the san_andres_y_providencia_coastline Layer and its Metadata.
- Restore the san_andres_y_providencia_coastline SLD Style.
Exercise
Cleanup The san_andres_y_providencia_coastline
Layer
Delete the san_andres_y_providencia_coastline Layer.

Connect to your local GeoNode instance and explore the available Layers.
GeoNode Explore Layers
Select the Layer San Andreas Coastlines and click on Layer Remove.
GeoNode Layer Remove
Confirm that you want to remove the san_andres_y_providencia_coastline Layer.

GeoNode Layer Remove Confirm
Double check that the Layer has also been removed from GeoServer.
Connect to your local GeoServer Admin GUI from the GeoNode admin menu.

GeoServer Admin GUI
Double check that the Layer san_andres_y_providencia_coastline is no longer present in the GeoServer Layer list. If it is still there, delete it manually.

GeoServer Admin GUI: Layers
Double check that the Style san_andres_y_providencia_coastline is no longer present in the GeoServer Style list. If it is still there, delete it manually.

GeoServer Admin GUI: Styles
Once GeoNode and GeoServer are cleared, import back the san_andres_y_providencia_coastline Layer through the "importlayers" GeoNode Management Command.

From the Terminal window, go to the GeoNode folder.

$> cd /home/geonode/geonode/
Finally import the san_andres_y_providencia_coastline Layer using the "importlayers" GeoNode Management Command.

$> python manage.py importlayers /home/geo/Desktop/backup/
Terminal: Import
san_andres_y_providencia_coastline
Note
Notice that:

- The "importlayers" GeoNode Management Command automatically imports all the Shapefiles available in the backup folder.
- The "importlayers" GeoNode Management Command automatically restores the Layer metadata if the ISOTC211/19115 XML is available in the same folder and has the same layer prefix.
- The "importlayers" GeoNode Management Command automatically restores the Layer styles if the SLD is available in the same folder and has the same layer prefix.
Double check that the
san_andres_y_providencia_coastline
Layer has been correctly restored into GeoNode.
Loading OSM Data into GeoNode¶
In this section, we will walk through the steps necessary to load OSM data into your GeoNode project. As discussed in previous sections, your GeoNode already uses OSM tiles from MapQuest and the main OSM servers as some of the available base layers. This session is specifically about extracting actual data from OSM and converting it for use in your project and potentially for Geoprocessing tasks.
The first step in this process is to get the data from OSM. We will be using the OSM Overpass API since it lets us do more complex queries than the OSM API itself. You should refer to the OSM Overpass API documentation to learn about all of its features. It is an extremely powerful API that lets you extract data from OSM using a very sophisticated API.
- http://wiki.openstreetmap.org/wiki/Overpass_API
- http://wiki.openstreetmap.org/wiki/Overpass_API/Language_Guide
In this example, we will be extracting building footprint data around Port au Prince in Haiti. To do this we will use an interactive tool that makes it easy to construct a query against the Overpass API. Point your browser at http://overpass-turbo.eu/ and use the search tools to zoom into Port au Prince and Cite Soleil specifically.
You will need to cut and paste the query specified below to get all of the appropriate data within the bbox:
<osm-script>
<union into="_">
<bbox-query {{bbox}}/>
<recurse into="x" type="node-relation"/>
<query type="way">
<bbox-query {{bbox}}/>
<has-kv k="building" v="yes"></has-kv>
</query>
<recurse into="x" type="way-node"/>
<recurse type="way-relation"/>
</union>
<print mode="meta"/>
</osm-script>
This should look like the following.

When you have the bbox and query set correctly, click the “Export” button on the menu to bring up the export menu, and then click the API interpreter link to download the OSM data based on the query you have specified.

This will download a file named ‘export.osm’ to your file system. You will probably want to rename it to something more specific. You can do that by issuing the following command in the directory where it was downloaded:
$ mv export.osm cite_soleil_buildings.osm
Note
You can also rename the file in your Operating System’s file management tool (Windows Explorer, Finder, etc.).
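As an alternative to the overpass-turbo Export menu, the same kind of query can be POSTed directly to the Overpass interpreter endpoint as a form-encoded data parameter. The sketch below uses a simplified building-only variant of the query above, illustrative bbox coordinates around Cite Soleil, and leaves the actual network request commented out:

```python
# Prepare an Overpass API request body; the {{bbox}} placeholder is
# overpass-turbo specific, so it must be replaced with real s/w/n/e
# coordinates before talking to the interpreter directly.
from urllib.parse import urlencode

OVERPASS = "http://overpass-api.de/api/interpreter"

query_template = """<osm-script>
  <query type="way">
    <bbox-query {{bbox}}/>
    <has-kv k="building" v="yes"/>
  </query>
  <print mode="meta"/>
</osm-script>"""

# Illustrative coordinates only; adjust to your area of interest.
bbox = 's="18.55" w="-72.37" n="18.60" e="-72.30"'
query = query_template.replace("{{bbox}}", bbox)
body = urlencode({"data": query})

# Uncomment to actually fetch the data:
# import urllib.request
# with urllib.request.urlopen(OVERPASS, body.encode()) as resp:
#     open("cite_soleil_buildings.osm", "wb").write(resp.read())

print("{{bbox}}" not in query)  # True
```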
Exporting OSM data to shapefile using QGIS¶
Now that we have OSM data on our filesystem, we will need to convert it into a format suitable for uploading to your GeoNode. There are many ways to accomplish this, but for the purposes of this example, we will use an OSM QGIS plugin that makes it fairly easy. Please consult the wiki page that explains how to install this plugin and make sure it is installed in your QGIS instance. Once it’s installed, you can use the Web Menu to load your file.

This will bring up a dialog box that you can use to find and convert the osm file we downloaded.

When the process has completed, you will see your layers in the Layer Tree in QGIS.

Since we are only interested in the polygons, we can turn the other 2 layers off in the Layer Tree.

The next step is to use QGIS to convert this layer into a Shapefile so we can upload it into GeoNode. To do this, select the layer in the Layer tree, right click and then select the Save As option.

This will bring up the Save Vector Layer as Dialog.

Specify where on disk you want your file saved, and hit Save then OK.

You now have a shapefile of the data you extracted from OSM that you can load into GeoNode. Use the GeoNode Layer Upload form to load the Shapefile parts into your GeoNode, optionally edit the metadata, and then you can view your layer in the Layer Info page in your GeoNode.

Note
You may want to switch to an imagery layer in order to more easily see the buildings on the OSM background.
Exporting OSM data to shapefile using GDAL¶
An alternative way to export the .osm file to a shapefile is to use ogr2ogr combined with the GDAL osm driver, available from GDAL version 1.10.
As a first step, inspect how the GDAL osm driver sees the .osm file using the ogrinfo command:
$ ogrinfo cite_soleil_buildings.osm
Had to open data source read-only.
INFO: Open of `cite_soleil_buildings.osm'
using driver `OSM' successful.
1: points (Point)
2: lines (Line String)
3: multilinestrings (Multi Line String)
4: multipolygons (Multi Polygon)
5: other_relations (Geometry Collection)
ogrinfo has detected 5 different geometric layers inside the OSM data source. As we are just interested in the buildings, you will export just the multipolygons layer to a new shapefile using the GDAL ogr2ogr command utility:
$ ogr2ogr cite_soleil_buildings cite_soleil_buildings.osm multipolygons
Now you can upload the shapefile to GeoNode using the GeoNode Upload form in the same manner as you did in the previous section.
Using GeoGig to Load and Manage OSM Data¶
Another alternative for working with OSM data in your GeoNode is to use GeoGig. GeoGig is a tool that draws inspiration from Git but adapts its core concepts to handle distributed versioning of geospatial data. GeoGig allows you to load OpenStreetMap data into a repository on your server and either export that data into PostGIS for use in your GeoNode, or configure the GeoGig GeoServer extension and expose the repo directly from GeoServer. An article about this process can be found on the Boundless Geo Blog, and the process is described below. Much of the impetus for GeoGig came from the ROGUE JCTD and the technology has been incorporated into the GeoSHAPE project.
Much more information about how to perform the steps below can be found in the GeoGig documentation page on Using GeoGig with OpenStreetMap data. The instructions that follow are only a brief overview of the process.
Getting Started¶
You will first need to install the GeoGig command line tools. These can be found on the project’s SourceForge page. Follow the instructions contained in the installation file in order to install the CLI tools on your GeoNode server. Once you have the tools installed and added to your path, the geogig command should be available:
$ geogig --version
Project Version : 1.0-SNAPSHOT
Build Time : December 18, 2014 at 03:59:10 UTC
Build User Name : Unknown
Build User Email : Unknown
Git Branch : master
Git Commit ID : a4a80a8dd853dfe497729b35399594947866e8ae
Git Commit Time : December 18, 2014 at 03:44:24 UTC
Git Commit Author Name : Gabriel Roldan
Git Commit Author Email : gabriel.roldan@gmail.com
Git Commit Message : Synchronize DefaultPlatform.getTempDir() to avoid false precondition check on concurrent access.
Once the geogig command is available, you will need to create an empty repository to hold your data. Change directories to a suitable location on your server’s filesystem and issue the following commands, substituting my_repo with whatever name you choose:
$ cd /somewhere/on/filesystem
$ mkdir my_repo
$ cd my_repo
$ geogig init
Initialized empty Geogig repository in /somewhere/on/filesystem/my_repo/.geogig
Loading OSM Data into your Repository¶
Now that you have an empty repository to store our data, the next step is to load the current snapshot of OSM data into your repository using the geogig osm download command. At a minimum, you will want to use a bounding box filter to limit the downloaded data to the area of interest for your GeoNode installation. The example below is a bounding box that encompasses the country of Malawi. More information about the geogig osm download command can be found in the geogig docs:
$ geogig osm download --bbox -17.129459 32.668991 -9.364680 35.920441 --saveto ./mw-osm-temp --keep-files
Connecting to http://overpass-api.de/api/interpreter...
Downloaded data will be kept in /somewhere/on/filesystem/my_repo/./mw-osm-temp
Importing into GeoGig repo...
1,164,420
1,164,420 entities processed in 5.892 min
Building trees for [node, way]
Trees built in 7.614 s
0%
Staging features...
100%
Committing features...
100%
Processed entities: 1,164,420.
Nodes: 1,091,572.
Ways: 72,848
GeoGig stores data in trees which are basically equivalent to layers in a normal geospatial context. At this point in the process, your repo contains 2 trees for ways and nodes from OSM. In order to convert these into layers that may be more familiar to your users like roads, buildings or medical facilities, you will need to apply a mapping that filters the complete list of nodes and ways and converts the tags into attributes. There is a great set of sample mappings in the US State Department’s CyberGIS project. You can find them here. You should clone this repository alongside your geogig repository and then apply the mappings as shown below:
$ geogig osm map ../cybergis-osm-mappings/mappings/basic/buildings_and_roads.json
$ geogig osm map ../cybergis-osm-mappings/mappings/health/medical_centers.json
$ geogig osm map ../cybergis-osm-mappings/mappings/education/schools.json
Now you can inspect the repository using the following commands:
$ geogig ls-tree
osm_roads
osm_buildings
node
way
$ geogig show osm_roads
TREE ID: 596a4f39ab9fadcbba6ffcaf5c135e29c2bc67d3
SIZE: 20288
NUMBER Of SUBTREES: 0
DEFAULT FEATURE TYPE ID: 0f7cbc6c114727858fb50668eb8a4448667bdc12
DEFAULT FEATURE TYPE ATTRIBUTES
id: <LONG>
geom: <LINESTRING>
status: <STRING>
media: <STRING>
name: <STRING>
ref: <STRING>
highway: <STRING>
lanes: <STRING>
oneway: <STRING>
surface: <STRING>
access: <STRING>
source: <STRING>
motor_vehicle: <STRING>
nodes: <STRING>
Updating the OSM Data in your Repository¶
As OpenStreetMap is a constantly changing dataset, you will want to periodically update your repo with the latest changes from OSM. GeoGig provides a way to do this using the geogig osm download command with the --update flag:
$ geogig osm download --update
Note
If you get an error like the one below, it means there have been no changes in OSM since your last update, and it can safely be ignored.
Committing features…
An unhandled error occurred: Nothing to commit after f249200302d5e808fb1b04f329b39b5853ffb7d0. See the log for more details.
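Because the update is a single command, it is easy to schedule. A hypothetical crontab entry (the log path is illustrative; the repository path is the one used in the examples above) that pulls OSM changes nightly could look like this:

```
# Illustrative crontab entry: update the repository every night at 02:00.
# Adjust the repository and log paths to match your setup.
0 2 * * * cd /somewhere/on/filesystem/my_repo && geogig osm download --update >> /var/log/geogig-osm-update.log 2>&1
```

Remember that the "Nothing to commit" error shown in the note above is expected whenever a scheduled run finds no new OSM changes.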
Serving your GeoGig repository in GeoNode¶
At this point you have several options for serving this repository from your GeoNode. The most basic option is to export the data to PostGIS and configure the PostGIS database in GeoServer for use in GeoNode. First create a PostGIS database that will be used to store the data, and then use the geogig pg export command to export the layers into this database for serving. Note that you will need to replace the connection parameters below to match your server's setup:
$ geogig pg export --host localhost --port 5432 --schema myschema --database my_osm_database --user my_user --password my_password osm_train_stations osm_train_stations
Caution
You may encounter an error when performing this step. This issue has been documented here and is being investigated by the GeoGig team. You will need to create a primary key metadata table as described here and then follow the steps below. If you have questions about this process, please ask for help on the developers mailing list.
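The metadata table itself can be created with DDL along the lines of the sketch below, which follows the column layout described in the GeoServer documentation for the PostGIS store (verify the exact layout against the docs for your GeoServer version):

```sql
-- Sketch of the primary key metadata table consulted by GeoServer's
-- PostGIS data store; only table_schema, table_name and pk_column are
-- required for the INSERT used in the steps below.
CREATE TABLE gt_pk_metadata_table (
  table_schema  VARCHAR(32) NOT NULL,
  table_name    VARCHAR(32) NOT NULL,
  pk_column     VARCHAR(32) NOT NULL,
  pk_column_idx INTEGER,
  pk_policy     VARCHAR(32),
  pk_sequence   VARCHAR(64),
  UNIQUE (table_schema, table_name, pk_column)
);
```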
Define the new primary key for the table that could not be exported (id is the OSM id):
INSERT INTO gt_pk_metadata_table (table_schema, table_name, pk_column) VALUES ('geogig_data','osm_train_stations','id');
Next you need to alter your osm data table accordingly:
ALTER TABLE geogig_data.osm_train_stations DROP CONSTRAINT osm_train_stations_pkey;
ALTER TABLE geogig_data.osm_train_stations ADD PRIMARY KEY (id);
ALTER TABLE geogig_data.osm_train_stations DROP COLUMN fid;
ALTER TABLE geogig_data.osm_train_stations ADD COLUMN fid character varying(64);
Then you can run the geogig pg export command with the -o option to overwrite the table:
$ geogig pg export -o --host localhost --port 5432 --schema myschema --database my_osm_database --user my_user --password my_password osm_train_stations osm_train_stations
At this point, you need to configure your PostGIS database connection in GeoServer. More information about this process can be found in the GeoServer documentation.
Once the layers are configured in GeoServer, run the updatelayers management command to register them in your GeoNode:
$ python manage.py updatelayers --store geogig_data --filter osm_train_stations
Using the GeoGig GeoServer Extension¶
The GeoGig project also contains a GeoServer extension that allows a GeoServer administrator to configure and serve a GeoGig store directly. This extension essentially lets you treat your GeoGig repository like any other store of spatial data.
Note
This section is still to be completed.
- Advanced Data Management and Processing
- Advanced Data Management and Processing techniques.
- GeoNode Advanced Configuration
- Learn how to deal with advanced GeoNode configuration settings and external Django apps.
- GeoNode on Production
- Concepts and techniques for the deployment of GeoNode and GeoServer on a production system.
- GeoNode Customization and Source Code Revision Control
- A workshop teaching how to customize GeoNode projects and put the source code under revision control with GitHub.
- Migrate Data Between GeoNode Instances
- A workshop teaching how to migrate Layers between GeoNode instances.
- GeoNode Overview & Reference
This module gives the user an overview of GeoNode and its main components.
At the end of this section you will have a clear view of what GeoNode is and what it can do. You will also be able to use GeoNode's main functionalities and understand some of the basic concepts of the system infrastructure.
- Installation & Admin
This module is oriented more toward users with some system administration background.
At the end of this section you will be able to set up the whole GeoNode infrastructure from scratch and understand how the different pieces are interconnected and what their dependencies are.
- Prerequisites
Before proceeding, make sure you have a clear understanding of the following concepts:
- GeoNode and Django framework basic concepts
- What is Python
- What is a DBMS
- What is a Java Virtual Machine and the JDK
- Linux OS basic shell and maintenance commands
- Basic TCP/IP and networking concepts
- Apache HTTPD Server and WSGI Python bindings
- Users Workshop
This workshop will teach you how to use GeoNode, going in depth into what you can do with the application. At the end of this section you will master all the GeoNode sections and entities from a user perspective.
You will know how to:
- Manage users accounts and how to modify them.
- Use and manage the different GeoNode basic resources.
- Use the GeoNode searching tools to find your resources.
- Manage Layers and Maps, update the styles and publish them.
- Load datasets into GeoNode and keep them synchronized with GeoServer.
- Prerequisites
Before proceeding, make sure you have a clear understanding of the following concepts:
- GeoNode and Django framework basic concepts
- What is Python
- What a geospatial server is, and basic knowledge of geospatial web services.
- What metadata and catalogs are.
- What maps and legends are.
- Administrators Workshop
This workshop will teach how to install and manage a deployment of the GeoNode software application. At the end of this section you will master all the GeoNode sections and entities from an administrator perspective.
You will know how to:
- Use the GeoNode’s Django Administration Panel.
- Use the console Management Commands for GeoNode.
- Configure and customize your GeoNode installation.
- Prerequisites
Before proceeding, make sure you have a clear understanding of the following concepts:
- GeoNode and Django framework concepts
- Good knowledge of Python
- Good knowledge of what a geospatial server is and of geospatial web services.
- Good knowledge of what metadata and catalogs are.
- Good knowledge of HTML and CSS.
- Developers Workshop
This workshop will teach you how to develop with and for the GeoNode software application. This module will introduce you to the components that GeoNode is built with, the standards that it supports and the services it provides based on those standards, and will give you an overview of its architecture.
- Prerequisites
- GeoNode is a web-based GIS tool, and as such, in order to do development on GeoNode itself or to integrate it into your own application, you should be familiar with basic web development concepts as well as with general GIS concepts.
- Advanced Workshop
This module introduces advanced techniques and methodologies for the management of geospatial data and for the maintenance and tuning of servers in production environments.
The last sections of the module will also teach you how to add brand-new classes and functionality to your GeoNode installation.
- Prerequisites
- You should be familiar with GeoNode, GeoServer, and Python development concepts, as well as with system administration and caching concepts and techniques.
Editing the documentation¶
This documentation is written in reStructuredText format and automatically built from this GitHub repository. To edit the documentation you'll need the following tools:
For installation and basic usage of the tools, follow these instructions.
License Information¶
Documentation¶
Documentation is released under a Creative Commons license with the following conditions.
You are free to Share (to copy, distribute and transmit) and to Remix (to adapt) the documentation under the following conditions:
- Attribution. You must attribute the documentation to the author.
- Share Alike. If you alter, transform, or build upon this work, you may distribute the resulting work only under the same or similar license to this one.
With the understanding that:
- Any of the above conditions can be waived if you get permission from the copyright holder.
- Public Domain. Where the work or any of its elements is in the public domain under applicable law, that status is in no way affected by the license.
- Other Rights. In no way are any of the following rights affected by the license:
- Your fair dealing or fair use rights, or other applicable copyright exceptions and limitations;
- The author’s moral rights;
- Rights other persons may have either in the work itself or in how the work is used, such as publicity or privacy rights.
Notice: For any reuse or distribution, you must make clear to others the license terms of this work. The best way to do this is with a link to this web page.
You may obtain a copy of the License at Creative Commons Attribution-ShareAlike 3.0 Unported License
The document is written in reStructuredText format for consistency and portability.
Author Information¶
This documentation was written by GeoSolutions.
The layout for the reStructuredText based documentation is based on the work done by the GeoNode project and the Sphinx framework.
If you have questions, find a bug, or have suggestions for enhancements, please contact us at info@geosolutions.it
- Creative Commons
- Creative Commons Attribution-ShareAlike 3.0 Unported License Creative Commons (CC) is a non-profit organization devoted to expanding the range of creative works available for others to build upon legally and to share. The organization has released several copyright licenses, known as Creative Commons licenses, free of charge to the public. These licenses allow creators to communicate which rights they reserve and which rights they waive for the benefit of recipients or other creators. An easy-to-understand one-page explanation of rights, with associated visual symbols, explains the specifics of each Creative Commons license. Creative Commons licenses do not replace copyright, but are based upon it. They replace individual negotiations for specific rights between copyright owner (licensor) and licensee, which are necessary under an “all rights reserved” copyright management, with a “some rights reserved” management employing standardized licenses for re-use cases where no commercial compensation is sought by the copyright owner. The result is an agile, low-overhead and low-cost copyright-management regime, benefiting both copyright owners and licensees.
Social settings¶
SOCIAL_BUTTONS¶
Default:
True
A boolean which specifies whether the social media icons and JavaScript should be rendered in GeoNode.
SOCIAL_ORIGINS¶
Default:
A list of dictionaries that is used to generate the social links displayed in the Share tab. For each origin, the name and url format parameters are replaced by the actual values of the ResourceBase object (layer, map, document).
CKAN_ORIGINS¶
Default:
A list of dictionaries that is used to generate the links to CKAN instances displayed in the Share tab. For each origin, the name and abstract format parameters are replaced by the actual values of the ResourceBase object (layer, map, document). This is not enabled by default. To enable it, uncomment the following line: SOCIAL_ORIGINS.extend(CKAN_ORIGINS).
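As a concrete illustration, a settings sketch along these lines would wire a CKAN instance into the Share tab. The origin entries below are hypothetical; the actual defaults live in GeoNode's settings.py and should be consulted for the exact dictionary keys:

```python
# Hypothetical settings sketch: entries are illustrative, not GeoNode's
# actual defaults. {name} and {abstract} are filled in from the shared
# ResourceBase object (layer, map, document) at render time.
SOCIAL_ORIGINS = [{
    "label": "Email",
    "url": "mailto:?subject={name}&body={abstract}",
    "css_class": "email",
}]

CKAN_ORIGINS = [{
    "label": "My CKAN portal",  # hypothetical CKAN instance
    "url": "http://ckan.example.org/dataset/new?title={name}&notes={abstract}",
    "css_class": "ckan",
}]

# Disabled by default; this line merges the CKAN links into the Share tab.
SOCIAL_ORIGINS.extend(CKAN_ORIGINS)
```

Since settings.py is plain Python, the extend call simply appends the CKAN dictionaries to the list that the Share tab iterates over.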
TWITTER_CARD¶
Default:
True
A boolean that specifies whether Twitter cards are enabled.
TWITTER_SITE¶
Default:
'@GeoNode'
A string that specifies the site for the twitter:site meta tag used by Twitter Cards.
TWITTER_HASHTAGS¶
Default:
['geonode']
A list that specifies the hashtags to use when sharing a resource via a social link.
OPENGRAPH_ENABLED¶
Default:
True
A boolean that specifies whether Open Graph is enabled. Open Graph is used by Facebook and Slack.