pyPhenology

pyPhenology is a software package for building plant phenology models. It has numpy at its core, making model building and prediction extremely fast. The core code was written to model phenology observations from the National Phenology Network, where abundant species have several thousand observations. The API is inspired by scikit-learn, so all models can be used interchangeably with the same code (e.g., model selection via AIC). pyPhenology is currently used to build the continental-scale phenology forecasts on http://phenology.naturecast.org


Installation

Required dependencies

  • Python 3.4 or later
  • scipy (1.0 or later)
  • numpy (1.8 or later)
  • pandas (0.18.0 or later)
  • joblib (0.12.0 or later)

All of these requirements can be installed on Linux, Windows, or Mac OSX, e.g. by using the package manager conda.
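
For example, if you use conda, a single command along these lines should pull in all of the required packages (exact channels and versions may vary):

conda install numpy scipy pandas joblib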

Instructions

pyPhenology supports all operating systems that fulfill the above dependencies and can simply be installed using pip:

pip install pyPhenology

or install the latest version via Github using pip + git:

pip install git+git://github.com/sdtaylor/pyPhenology

Quickstart

Fitting a Thermal Time Model

This short overview will fit a basic ThermalTime model, which has 3 parameters. We will fit the model using flowering observations of blueberry (Vaccinium corymbosum) from the Harvard Forest, which can be loaded from the package:

from pyPhenology import models, utils
import numpy as np

observations, predictors = utils.load_test_data(name='vaccinium', phenophase='flowers')

The two objects (observations and predictors) are pandas data.frames. The observations data.frame contains the direct phenology observations as well as an associated site and year for each. The predictors data.frame has several variables used to predict phenology, such as daily mean temperature, latitude and longitude, and daylength. They are listed for each site and year represented in observations. Both of these data.frames are required for building models. Read more about how phenology data is structured in this package here.

Next load the model:

model = models.ThermalTime()

Then fit the model using the vaccinium flowering data:

model.fit(observations, predictors)

The model's estimated parameters can be viewed via a method call:

model.get_params()
{'F': 356.9379498434921, 'T': 5.507956473407982, 't1': 27.33480003499163}

Here, the fitted vaccinium flower model has a temperature threshold T of 5.5 degrees C, a starting accumulation day t1 of julian day 27, and total degree day requirement F of 357 units.
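
To make these parameters concrete, here is a rough sketch of how a thermal time model turns them into a predicted day of year: forcing above the threshold T is accumulated starting on day t1, and the predicted day is the first day the accumulation reaches F. This is only an illustration of the idea, not the package's internal implementation:

import numpy as np

def thermal_time_doy(daily_temp, doy, t1, T, F):
    # daily_temp and doy are aligned 1-D arrays of daily mean temperature
    # and julian date for a single site and year, sorted by doy.
    forcing = np.where((doy >= t1) & (daily_temp > T), daily_temp - T, 0.0)
    cumulative = np.cumsum(forcing)
    reached = np.nonzero(cumulative >= F)[0]
    # Return the first doy where accumulated forcing reaches F,
    # or None if it is never reached.
    return doy[reached[0]] if reached.size else None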

The fitted model can also be used to predict the day of flowering based on the fitted parameters. By default it will make predictions on the data used for fitting:

model.predict()
array([126, 126, 127, 127, 126, 129, 129, 127, 132, 132, 133, 133, 132,
  132, 130, 130, 130, 129, 127, 126, 132, 130, 129, 132, 132, 133,
  133, 137, 137, 141, 141, 142, 132, 141, 141, 139, 139, 139, 139,
  137, 137, 141, 141, 141, 141, 142, 142, 142])

Note that the model predictions differ from the actual observations. For example, the root mean square error of the predictions is about 3 days:

np.sqrt(((model.predict() - observations.doy)**2).mean())
2.846781808756454

This value can also be calculated by the model directly using Model.score:

model.score()
2.846781808756454

It’s common to fix one or more of the parameters in phenology models, for example setting the starting day of warming accumulation, t1, in the Thermal Time model to January 1 (julian day 1). This is done in the model initialization. By default all parameters in a model are estimated; specifying one as fixed is done with the parameters argument:

model_fixed_t1 = models.ThermalTime(parameters={'t1':1})

Note that the exact parameters to specify are different for each model. See the details of them in the Primary Models section.

The model can then be fit as before:

model_fixed_t1.fit(observations, predictors)
model_fixed_t1.get_params()
{'F': 293.4456066438384, 'T': 7.542323552813556, 't1': 1}

This model has a slightly worse error than the prior model, as t1=1 is not optimal for this particular dataset:

model_fixed_t1.score()
3.003470215156683

Finally to save the model for later analysis use save_params():

model.save_params(filename='blueberry_model.json')

The model can be loaded again later using utils.load_saved_model:

model = utils.load_saved_model(filename='blueberry_model.json')

Note that predict() and score() do not work on a loaded model, as only the model parameters, and not the data used to fit the model, are saved:

model.predict()
TypeError: No to_predict + temperature passed, and no fitting done. Nothing to predict

But given the data again, or new data, the loaded model can still be used to make predictions:

model.predict(to_predict=observations, predictors=predictors)
array([126, 126, 127, 127, 126, 129, 129, 127, 132, 132, 133, 133, 132,
   132, 130, 130, 130, 129, 127, 126, 132, 130, 129, 132, 132, 133,
   133, 137, 137, 141, 141, 142, 132, 141, 141, 139, 139, 139, 139,
   137, 137, 141, 141, 141, 141, 142, 142, 142])

For a more detailed analysis see the examples on Model selection via AIC or RMSE Evaluation.

Data Structure

Your data must be structured in a specific way to be used in the package.

Phenology Observation Data

Observation data consists of the following:

  • doy: The julian date (1-365) on which a specific phenological event happened.
  • site_id: A site identifier for each doy observation
  • year: A year identifier for each doy observation

These should be structured as columns in a pandas data.frame, where every row is a single observation. For example, the built-in vaccinium dataset looks like this:

from pyPhenology import models, utils
observations, temp = utils.load_test_data(name='vaccinium')

observations.head()

                species  site_id  year  doy  phenophase
0  vaccinium corymbosum        1  1991  100         371
1  vaccinium corymbosum        1  1991  100         371
2  vaccinium corymbosum        1  1991  104         371
3  vaccinium corymbosum        1  1998  106         371
4  vaccinium corymbosum        1  1998  106         371

There are extra columns here for the species and phenophase; those will be ignored inside the pyPhenology package.
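
If your own observations are in a different format, they just need to end up as a data.frame with these three columns. A minimal sketch, where the values are made up purely for illustration:

import pandas as pd

# Hypothetical observations: one row per phenology observation
observations = pd.DataFrame({'site_id': [1, 1, 2],
                             'year':    [2010, 2011, 2010],
                             'doy':     [120, 124, 118]})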

Phenology Environmental Data

The majority of the models use only daily mean temperature as a driver. This is required for every day of the winter and spring leading up to the phenophase event. The predictors data.frame should have the following structure:

  • site_id: A site identifier for each location.
  • year: The year of the temperature timeseries
  • temperature: The observed daily mean temperature in degrees Celsius.
  • doy: The julian date of the mean temperature

These columns should be in a data.frame like the observations. The example vaccinium dataset has temperature observations:

predictors.head()
    site_id  temperature  year  doy  latitude  longitude  daylength
0        1        -3.86  1989    0   42.5429   -72.2011       8.94
1        1        -4.71  1989    1   42.5429   -72.2011       8.95
2        1        -1.56  1989    2   42.5429   -72.2011       8.97
3        1        -7.88  1989    3   42.5429   -72.2011       8.98
4        1       -15.24  1989    4   42.5429   -72.2011       9.00

Note that any other columns in the predictors data.frame besides the ones used will be ignored.
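
As with the observations, your own temperature records just need to be reshaped into this long format, one row per site, year, and day. A minimal sketch with invented temperature values:

import numpy as np
import pandas as pd

doy = np.arange(0, 180)
predictors = pd.DataFrame({'site_id': 1,
                           'year': 2010,
                           'doy': doy,
                           # Random values standing in for observed daily mean temperature
                           'temperature': np.random.normal(5, 8, size=len(doy))})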

Currently two other models use predictors besides daily mean temperature. The M1 model uses daylength as a predictor as well as daily mean temperature; its predictors data.frame should thus have a daylength column in addition to the temperature shown above.

The Naive model uses only latitude in its calculation and thus requires a predictors data.frame with the latitude of every site. For example:

predictors.head()

   site_id   latitude
0      258  39.184269
1      414  44.277962
2      475  47.027077
3      637  44.340950
4      681  41.296783

On the Julian Date

The julian date (usually referenced as DOY for “day of year”) is used throughout the package. This can be negative if referencing something from the prior season. For example consider the following data from the aspen dataset:

predictors.head()

   site_id  temperature  year  doy   latitude   longitude  daylength
0      258         6.28  2009  -67  39.184269 -106.854614      10.52
1      414         8.12  2009  -67  44.277962  -70.315315      10.22
2      475         5.30  2009  -67  47.027077 -114.049248      10.04
3      637         8.30  2009  -67  44.340950  -72.461220      10.22
4      681         9.85  2009  -67  41.296783 -105.574600      10.40

The doy -67 here refers to Oct. 26 for the growing year 2009. Formatting dates in this fashion allows for a continuous range of numbers across years, and is common in phenology studies.

January 1 will always be DOY 1.

Notes

  • If you have only a single site, make a “dummy” site_id column set to 1 in both the observations and predictors data.frames, as in the sketch below.
  • If you have only a single year, it must still be represented in the year column of both data.frames.
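
For example, a single-site, single-year dataset can be padded out like this (a minimal sketch; the year value is just a placeholder):

# Add constant "dummy" identifier columns to both data.frames
observations['site_id'] = 1
observations['year'] = 2010
predictors['site_id'] = 1
predictors['year'] = 2010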

Models

Model API

All models use the same methods for fitting, prediction, and saving.

  • Model.fit(observations, predictors[, …]): Estimate the parameters of a model.
  • Model.predict([to_predict, predictors]): Make predictions.
  • Model.score([metric, doy_observed, …]): Evaluate a prediction given observed doy values.
  • Model.save_params(filename[, overwrite]): Save the parameters for a model.
  • Model.get_params(): Get the fitted parameters.

Primary Models

  • ThermalTime([parameters]): Thermal Time model.
  • Alternating([parameters]): Alternating model.
  • Uniforc([parameters]): Uniforc model.
  • Unichill([parameters]): Unichill two-phase model.
  • Linear([parameters]): Linear regression model.
  • MSB([parameters]): Macroscale Species-specific Budburst model.
  • Sequential([parameters]): Sequential model.
  • M1([parameters]): Thermal Time model with a daylength correction.
  • FallCooling([parameters]): Fall senescence model.
  • Naive([parameters]): A naive model of the spatially interpolated mean.

Ensemble Models

  • Ensemble(core_models): Fit an ensemble of different models.
  • BootstrapModel([core_model, num_bootstraps, …]): Fit a model using bootstrapping of the data.
  • WeightedEnsemble(core_models): Fit an ensemble of many models with associated weights.

Controlling Parameter Estimation

By default all parameters in the models are free to vary within predefined search ranges, which were chosen to be applicable to a wide variety of plants. Parameter estimation can be adjusted in two ways:

  • The search range can be adjusted.
  • Any or all parameters can be set to a fixed value.

Both of these are done via the parameters argument in the initial model call.

Setting parameters to fixed values

Here is a common example, where in the ThermalTime model t1, the day when warming accumulation begins, is set to Jan. 1 (doy 1) by setting it to an integer. The other two parameters, F and T, are then estimated:

from pyPhenology import models, utils
observations, temp = utils.load_test_data(name='vaccinium')

model = models.ThermalTime(parameters={'t1':1})
model.fit(observations, temp)
model.get_params()

{'T': 8.6286577557177608, 'F': 156.76212563809247, 't1': 1}

Similarly, we can also set the temperature threshold T to a fixed value. Then only F, the total degree days required, is estimated:

model = models.ThermalTime(parameters={'t1':1,'T':5})
model.fit(observations, temp)
model.get_params()

{'F': 274.29110894742541, 't1': 1, 'T': 5}

Note that if you set all the parameters of a model to fixed values then no fitting can be done:

model = models.ThermalTime(parameters={'t1':1,'T':5, 'F':50})
model.fit(observations, temp)

RuntimeError: No parameters to estimate

One more example where the Uniforc model is set to a t1 of 60 (about March 1), and the other parameters are estimated:

model = models.Uniforc(parameters={'t1':60})
model.fit(observations, temp)
model.get_params()

{'F': 11.259894714800524, 'b': -3.1259822030672773, 'c': 9.1700700063424012, 't1': 60}

Setting a search range for parameters

To specify a different search range for a parameter use tuples with a low and high value. These can be mixed and matched with setting fixed values.

For example the Thermal Time model with narrow search range for t1 and F but T fixed at 5 degrees C:

model = models.ThermalTime(parameters={'t1':(-10,10), 'F':(100,500),'T':5})
model.fit(observations, temp)
model.get_params()

{'t1': 4.9538373877994291, 'F': 270.006971948699, 'T': 5}

The above works well for the default optimization method, Differential Evolution. For the brute force method you can also specify a slice in the form (low, high, step); see Brute Force.

Saving and loading models

Parameters from a model can be obtained in a dictionary via the Model.get_params method:

model.get_params()

Fitted models can also be saved to a file:

model.save_params(filename='model_1_parameters.json')

Parameters are saved to a json file, though the json extension isn’t required.

Saved model files can be loaded again using utils.load_saved_model:

model = utils.load_saved_model('model_1_parameters.json')
model.get_params()
{'t1': 4.9538373877994291, 'F': 270.006971948699, 'T': 5}

Models can also be loaded by passing the filename as the parameters argument in the model initialization. Note that the model being initialized must match the saved model:

model = models.ThermalTime(parameters = 'model_1_parameters.json')

Optimizer Methods

Parameters of phenology models have a complex search space and are commonly fit with global optimization algorithms. To estimate parameters, pyPhenology uses the optimizers built into scipy. The available optimizers are:

  • Differential evolution (the default)
  • Basin hopping
  • Brute force

Changing the optimizer

The optimizer can be specified in the fit method by using the codes DE, BH, or BF for differential evolution, basin hopping, or brute force, respectively:

from pyPhenology import models, utils
observations, temp = utils.load_test_data(name='vaccinium')

model = models.ThermalTime(parameters={})
model.fit(observations, temp, method='DE')

Optimizer Arguments

Arguments to the optimization algorithm are crucial to model fitting. These control things like the maximum number of iterations and the convergence criteria. Ideally one would choose arguments that find the “true” solution, but this is a tradeoff against the time and resources available. Models with a large number of parameters (such as the Unichill model) can take several hours to days to fit if the optimizer arguments are set liberally.

Optimizer arguments can be set two ways. The first is using some preset defaults:

model.fit(observations, temp, method='DE', optimizer_params='practical')
  • testing: Designed to be quick for testing code. Results from this should not be used for analysis.
  • practical: The default. Should produce realistic results on desktop systems in a relatively short period.
  • intensive: Designed to find the absolute optimal solution. Can potentially take hours to days on large datasets.

The second is using a dictionary of customized optimizer arguments:

model.fit(observations, temp, method='DE',optimizer_params={'popsize': 50,
                                                            'maxiter': 5000})

All the arguments in the scipy optimizer functions are available via the optimizer_params argument in fit. The important ones are described below, but also look at the available options in the scipy documentation. Any arguments not set will use the default specified in the scipy package. Note that the three presets can be used with all optimizer methods, but customized arguments will only work for their specific method. For example the popsize argument above will only work with method DE.
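
For instance, combining a method with its own arguments might look like the following sketch, where the argument values are arbitrary and chosen only to illustrate the pattern:

# Basin hopping with customized arguments. niter, stepsize, and disp are
# the basin hopping arguments described below.
model.fit(observations, temp, method='BH',
          optimizer_params={'niter': 1000,
                            'stepsize': 0.5,
                            'disp': True})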

Optimizers

Differential Evolution

Differential evolution uses a population of models each randomly initialized to different parameter values within the respective search spaces. Each “member” is adjusted slightly based on the performance of the best model. This process is repeated until the maxiter value is reached or a convergence threshold is met.

Differential evolution Scipy documentation

Differential Evolution Key arguments

See the official documentation for more in depth details.

  • maxiter : int
    The maximum number of iterations. Higher means potentially longer fitting times.
  • popsize : int
    Total population size of randomly initialized models. Higher means longer fitting times.
  • mutation : float, or tuple
    Mutation constant, which must satisfy 0 < x < 2. Can be a constant (float) or a range (tuple). Higher means longer fitting times.
  • recombination : float
    Probability of a member progressing to the next generation, which must satisfy 0 < x < 1. Lower means longer fitting times.
  • disp : boolean
    Display output as the model is fit.
Differential Evolution Presets
  • testing:

    {'maxiter':5,
     'popsize':10,
     'mutation':(0.5,1),
     'recombination':0.25,
     'disp':False}
    
  • practical:

    {'maxiter':1000,
     'popsize':50,
     'mutation':(0.5,1),
     'recombination':0.25,
     'disp':False}
    
  • intensive:

    {'maxiter':10000,
     'popsize':100,
     'mutation':(0.1,1),
     'recombination':0.25,
     'disp':False}
    

Basin Hopping

Basin hopping first makes a random guess at the parameters, then finds the local minimum using L-BFGS-B. From the local minimum the parameters are then randomly perturbed and minimized again, with the new estimate accepted using a Metropolis criterion. This is repeated until niter is reached. The parameters with the best score are selected. Basin hopping is essentially the same as simulated annealing, but with the added step of finding the local minimum between random perturbations. Simulated annealing was retired from the Scipy package in favor of basin hopping; see the note here.

Basin Hopping Scipy documentation

Basin Hopping Key arguments

See the official documentation for more in depth details.

  • niter : int
    The number of iterations. Higher means potentially longer fitting times.
  • T : float
    The “temperature” parameter for the accept or reject criterion.
  • stepsize : float
    The size of the random perturbations.
  • disp : boolean
    Display output as the model is fit.
Basin Hopping Presets
  • testing:

    {'niter': 100,
     'T': 0.5,
     'stepsize': 0.5,
     'disp': False}
    
  • practical:

    {'niter': 50000,
     'T': 0.5,
     'stepsize': 0.5,
     'disp': False}
    
  • intensive:

    {'niter': 500000,
     'T': 0.5,
     'stepsize': 0.5,
     'disp': False}
    

Brute Force

Brute force is a comprehensive search within predefined parameter ranges.

Brute force Scipy documentation

Brute Force Key Arguments

See the official documentation for more details

  • Ns : int
    Number of grid points within search space to search over. See below.
  • finish : function
    Function to find the local best solution from the best search space solution. This is set to optimize.fmin_bfgs in the presets, which is the scipy bfgs minimizer. See more options here.
  • disp : boolean
    Display output as the model is fit.
Brute Force Presets
  • testing:

    {'Ns':2,
     'finish':optimize.fmin_bfgs,
     'disp':False}
    
  • practical:

    {'Ns':20,
     'finish':optimize.fmin_bfgs,
     'disp':False}
    
  • intensive:

    {'Ns':40,
     'finish':optimize.fmin_bfgs,
     'disp':False}
    
Brute Force Notes

The Ns argument defines the number of points to test for each search parameter. For example, consider the following search spaces for a three-parameter model:

{'t1': (-10,10), 'T':(0,10), 'F': (0,1000),}

Using Ns=20 will search all combinations of:

t1=[-10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
T=[0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5]
F=[0, 50, 100, 150, 200, 250, 300, 350, 400, 450, 500, 550, 600, 650, 700, 750, 800, 850, 900, 950]

which results in \(20^3\) model evaluations. In this way model fitting time increases exponentially with the number of parameters in a model.

Alternatively, you can set the search range using slices of (low, high, step) instead of (low, high). This allows for more control over the search space of each parameter. For example:

{'t1': slice(-10, 10, 1),'T': slice(0,10, 1),'F': slice(0,1000, 5)}

Note that using slices this way only works for the brute force method. This can create a more realistic search space for each parameter, but in this example the number of evaluations is still high, \(20 \times 10 \times 200 = 40000\). It’s recommended that brute force only be used for models with a low number of parameters; otherwise Differential Evolution is quicker and more robust.
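
Putting this together, a brute force fit with slice-based search ranges might look like the sketch below. The ranges simply mirror the example above and are not a recommendation:

# Slices of (low, high, step) only work with the brute force method ('BF')
model = models.ThermalTime(parameters={'t1': slice(-10, 10, 1),
                                       'T': slice(0, 10, 1),
                                       'F': slice(0, 1000, 5)})
model.fit(observations, temp, method='BF')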

Utilities

Model Loading

Models can be loaded individually using the base classes:

from pyPhenology import models
model1 = models.ThermalTime()
model2 = models.Alternating()

or with a string of the same name via load_model. Note that they must be initialized after loading, which allows you to set the parameters:

from pyPhenology import utils
Model1 = utils.load_model('ThermalTime')
Model2 = utils.load_model('Alternating')
model1 = Model1()
model2 = Model2()

The BootstrapModel must still be loaded directly. But it can accept core models loaded via load_model:

model = models.BootstrapModel(core_model = utils.load_model('Alternating'))

Test Data

Two sets of observations are available for use in the package, as well as associated mean daily temperature derived from the PRISM dataset. The data are in pandas data.frames as outlined in Data Structure.

The first is observations of Vaccinium corymbosum from Harvard Forest, with both flower and budburst phenophases.

from pyPhenology import utils
observations, predictors = utils.load_test_data(name='vaccinium',
                                                phenophase='budburst')

observations.head()

                species  site_id  year  doy  phenophase
0  vaccinium corymbosum        1  1991  100         371
1  vaccinium corymbosum        1  1991  100         371
2  vaccinium corymbosum        1  1991  104         371
3  vaccinium corymbosum        1  1998  106         371
4  vaccinium corymbosum        1  1998  106         371

predictors.head()

   site_id  temperature    year  doy
0        1        -3.86  1989.0  0.0
1        1        -4.71  1989.0  1.0
2        1        -1.56  1989.0  2.0
3        1        -7.88  1989.0  3.0
4        1       -15.24  1989.0  4.0

observations, predictors = utils.load_test_data(name='vaccinium',
                                                phenophase='flowers')

observations.head()

                 species  site_id  year  doy  phenophase
48  vaccinium corymbosum        1  1998  122         501
49  vaccinium corymbosum        1  1998  122         501
50  vaccinium corymbosum        1  1991  124         501
51  vaccinium corymbosum        1  1991  124         501
52  vaccinium corymbosum        1  1998  126         501

The second is Populus tremuloides (aspen) from the National Phenology Network, which also has flower and budburst phenophases available. It has observations across many sites, making it a suitable test for spatially explicit models.

observations, predictors = utils.load_test_data(name='aspen',
                                                phenophase='budburst')

observations.head()
               species  site_id  year  doy  phenophase
0  populus tremuloides    16374  2014   44         371
1  populus tremuloides     2330  2011   47         371
2  populus tremuloides     2330  2010   48         371
3  populus tremuloides     1020  2009   73         371
4  populus tremuloides    22332  2016   72         371

Examples

Model selection via AIC

This will fit the vaccinium budburst data to 3 different models and choose the best performing one via AIC.

from pyPhenology import utils
import numpy as np

models_to_test = ['ThermalTime','Alternating','Linear']

observations, predictors = utils.load_test_data(name='vaccinium',
                                                phenophase='budburst')

observations_test = observations[0:10]
observations_train = observations[10:]

# AIC based off mean sum of squares
def aic(obs, pred, n_param):
    return len(obs) * np.log(np.mean((obs - pred)**2)) + 2*(n_param + 1)

best_aic = np.inf
best_model = None
best_model_name = None

for model_name in models_to_test:
    Model = utils.load_model(model_name)
    model = Model()
    model.fit(observations_train, predictors, optimizer_params='practical')
    
    model_aic = aic(obs = observations_test.doy.values,
                    pred = model.predict(observations_test,
                                         predictors),
                    n_param = len(model.get_params()))
    
    if model_aic < best_aic:
        best_model = model
        best_model_name = model_name
        best_aic = model_aic
        
    print('model {m} got an aic of {a}'.format(m=model_name,a=model_aic))
    
print('Best model: {m}'.format(m=best_model_name))
print('Best model parameters:')
print(best_model.get_params())

Output:

model ThermalTime got an aic of 55.51000634199631
model Alternating got an aic of 60.45760650906022
model Linear got an aic of 64.01178179718035
Best model: ThermalTime
Best model parameters:
{'t1': 90.018369435585129, 'T': 2.7067432485899765, 'F': 181.66471096956883}

RMSE Evaluation

This will calculate the root mean square error (RMSE) on 2 species, each with a budburst and flower phenophase, using 2 models. Both are Thermal Time models with a start date of 1 (Jan. 1), and the temperature threshold is 0 for one and 5 for the other.

from pyPhenology import utils, models
import numpy as np
import pandas as pd

datasets_to_use = ['vaccinium','aspen']
phenophases_to_use = ['budburst','flowers']

num_bootstraps = 5

# Two Thermal Time models with a fixed start day of Jan 1, and
# with different fixed temperature thresholds.
# Each gets variation via bootstrapping.
bootstrapped_tt_model_1 = models.BootstrapModel(core_model=models.ThermalTime,
                                                num_bootstraps=num_bootstraps,
                                                parameters={'t1':1,
                                                            'T':0})

bootstrapped_tt_model_2 = models.BootstrapModel(core_model=models.ThermalTime,
                                                num_bootstraps=num_bootstraps,
                                                parameters={'t1':1,
                                                            'T':5})

models_to_fit = {'TT Temp 0':bootstrapped_tt_model_1,
                 'TT Temp 5':bootstrapped_tt_model_2}

results = pd.DataFrame()

for dataset in datasets_to_use:
    for phenophase in phenophases_to_use:
        
        observations, predictors = utils.load_test_data(name=dataset,
                                                        phenophase=phenophase)
        
        # Set up a 20% train/test split using pandas methods
        observations_test = observations.sample(frac=0.2,
                                                random_state=1)
        observations_train = observations[~observations.index.isin(observations_test.index)]
        
        observed_doy = observations_test.doy.values
        
        for model_name, model in models_to_fit.items():
            model.fit(observations_train, predictors, optimizer_params='practical')
            
            # Using aggregation='none' in BootstrapModel.predict
            # returns results for all bootstrapped models in a
            # (num_bootstraps, n_samples) array. This will calculate
            # the RMSE of each bootstrapped model and the variation around that.
            predicted_doy = model.predict(observations_test, predictors, aggregation='none')

            rmse = np.sqrt(np.mean( (predicted_doy - observed_doy)**2, axis=1))
            
            results_this_set = pd.DataFrame()
            results_this_set['rmse'] = rmse
            results_this_set['dataset'] = dataset
            results_this_set['phenophase'] = phenophase
            results_this_set['model'] = model_name

            results = results.append(results_this_set, ignore_index=True)


from plotnine import *

(ggplot(results, aes(x='model', y='rmse')) + 
     geom_boxplot() + 
     facet_grid('dataset~phenophase', scales='free_y'))
Resulting figure: https://i.imgur.com/vTdKOdO.png

API

Details for anything not outlined in Model API.

Utilities

  • utils.load_model(name): Load a model via a string.
  • utils.load_test_data([name, phenophase]): Pre-loaded phenology data.
  • utils.load_saved_model(filename): Load a previously saved model file.

Get in touch

See the GitHub Repo to see the source code or submit issues and feature requests.

License

pyPhenology uses the open source MIT License

Citation

If you use this software in your research, please cite it as:

Taylor, S. D. (2018). pyPhenology: A python framework for plant phenology modelling. Journal of Open Source Software, 3(28), 827. https://doi.org/10.21105/joss.00827

Bibtex:

@article{Taylor2018,
author = {Taylor, Shawn David},
doi = {10.21105/joss.00827},
journal = {Journal of Open Source Software},
mendeley-groups = {Software/Data},
month = {aug},
number = {28},
pages = {827},
title = {{pyPhenology: A python framework for plant phenology modelling}},
url = {http://joss.theoj.org/papers/10.21105/joss.00827},
volume = {3},
year = {2018}
}

Acknowledgments

Development of this software was funded by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through Grant GBMF4563 to Ethan P. White.