Welcome to the Kubeflow Pipelines SDK API reference
Main documentation: https://www.kubeflow.org/docs/pipelines/
Source code: https://github.com/kubeflow/pipelines/
kfp package
kfp.compiler package
class kfp.compiler.Compiler
Bases: object

DSL Compiler.

It compiles DSL pipeline functions into workflow yaml. Example usage:

```python
@dsl.pipeline(
    name='name',
    description='description')
def my_pipeline(a: dsl.PipelineParam, b: dsl.PipelineParam):
    pass

Compiler().compile(my_pipeline, 'path/to/workflow.yaml')
```
compile(pipeline_func, package_path, type_check=True)
Compile the given pipeline function into workflow yaml.

Parameters:
- pipeline_func – pipeline function decorated with @dsl.pipeline.
- package_path – the output workflow tar.gz file path, for example "~/a.tar.gz".
- type_check – whether to enable the type check or not. Default: True.
class kfp.compiler.VersionedDependency(name, version=None, min_version=None, max_version=None)
Bases: object

VersionedDependency specifies the name and version constraints of a package dependency.
Properties:
- max_version
- min_version
- name
kfp.compiler.build_docker_image(staging_gcs_path, target_image, dockerfile_path, timeout=600, namespace='kubeflow')
build_docker_image automatically builds a container image based on the specification in the Dockerfile and pushes it to the target_image.

Parameters:
- staging_gcs_path (str) – GCS blob that can store temporary build files
- target_image (str) – gcr path to push the final image
- dockerfile_path (str) – local path to the Dockerfile
- timeout (int) – the timeout for the image build (in seconds), default is 600 seconds
- namespace (str) – the namespace within which to run the Kubernetes Kaniko job, default is "kubeflow"
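A minimal usage sketch; the staging bucket, target image, and Dockerfile path below are placeholders:

```python
from kfp.compiler import build_docker_image

# Build the image described by ./Dockerfile with Kaniko and push it to GCR.
# 'gs://my-bucket/build-staging' and 'gcr.io/my-project/my-image:v1' are
# hypothetical locations; substitute your own bucket and registry path.
build_docker_image(
    staging_gcs_path='gs://my-bucket/build-staging',
    target_image='gcr.io/my-project/my-image:v1',
    dockerfile_path='./Dockerfile')
```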
kfp.compiler.build_python_component(component_func, target_image, base_image=None, dependency=[], staging_gcs_path=None, build_image=True, timeout=600, namespace='kubeflow', target_component_file=None, python_version='python3')
build_python_component automatically builds a container image for the component_func based on the base_image and pushes it to the target_image.

Parameters:
- component_func (python function) – The python function to build the component upon
- base_image (str) – Docker image to use as a base image
- target_image (str) – Full URI to push the target image
- staging_gcs_path (str) – GCS blob that can store temporary build files
- build_image (bool) – whether to build the image or not. Default is True.
- timeout (int) – the timeout for the image build (in seconds), default is 600 seconds
- namespace (str) – the namespace within which to run the Kubernetes Kaniko job, default is "kubeflow"
- dependency (list) – a list of VersionedDependency objects, each of which includes the package name and version constraints; default is empty
- python_version (str) – choose python2 or python3, default is python3

Raises: ValueError – The function is not decorated with the python_component decorator, or the python_version is neither python2 nor python3.
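A sketch of building a decorated component function into an image; the registry path, staging bucket, and the pandas version pin are illustrative:

```python
import kfp.dsl as dsl
from kfp.compiler import build_python_component, VersionedDependency

@dsl.python_component(name='add', description='adds two numbers')
def add(a: float, b: float) -> float:
    return a + b

build_python_component(
    component_func=add,
    target_image='gcr.io/my-project/add-component:v1',  # hypothetical GCR path
    base_image='tensorflow/tensorflow:1.11.0-py3',
    staging_gcs_path='gs://my-bucket/build-staging',    # hypothetical bucket
    dependency=[VersionedDependency(name='pandas', min_version='0.24.0')])
```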
kfp.components package
class kfp.components.ComponentStore(local_search_paths=None, url_search_prefixes=None)
Bases: object
load_component(name, digest=None, tag=None)
Loads a component from a local file or URL and creates a task factory function.

Search locations:
- <local-search-path>/<name>/component.yaml
- <url-search-prefix>/<name>/component.yaml

If the digest is specified, then the search locations are:
- <local-search-path>/<name>/versions/sha256/<digest>
- <url-search-prefix>/<name>/versions/sha256/<digest>

If the tag is specified, then the search locations are:
- <local-search-path>/<name>/versions/tags/<tag>
- <url-search-prefix>/<name>/versions/tags/<tag>

Parameters:
- name – Component name used to search and load the component artifact containing the component definition. A component name usually has the form group/subgroup/component.
- digest – Strict component version: SHA256 hash digest of the component artifact file. Can be used to load a specific component version so that the pipeline is reproducible.
- tag – Version tag. Can be used to load a component version from a specific branch. The version of the component referenced by a tag can change in the future.

Returns: A factory function with a strongly-typed signature. Once called with the required arguments, the factory constructs a pipeline task instance (ContainerOp).
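A usage sketch; the URL prefix and component name are hypothetical:

```python
from kfp.components import ComponentStore

# Search the current directory first, then a hypothetical raw-GitHub prefix.
store = ComponentStore(
    local_search_paths=['.'],
    url_search_prefixes=['https://raw.githubusercontent.com/my-org/my-components/master/'])

# 'my-group/my-component' is a placeholder name; pin a digest for reproducibility.
my_op = store.load_component('my-group/my-component')
# Inside a pipeline function, calling the factory creates a ContainerOp:
# task = my_op(input_1='value')
```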
kfp.components.func_to_component_text(func, extra_code='', base_image='tensorflow/tensorflow:1.11.0-py3')
Converts a Python function to a component definition and returns its textual representation.

The function docstring is used as the component description. Argument and return annotations are used as component input/output types. To declare a function with multiple return values, use the NamedTuple return annotation syntax:

```python
from typing import NamedTuple

def add_multiply_two_numbers(a: float, b: float) -> NamedTuple('DummyName', [('sum', float), ('product', float)]):
    """Returns sum and product of two arguments"""
    return (a + b, a * b)
```

Parameters:
- func – The python function to convert
- base_image – Optional. Specify a custom Docker container image to use in the component. For lightweight components, the image needs to have Python 3.5+. Default is tensorflow/tensorflow:1.11.0-py3. Note: the image can also be specified by decorating the function with the @python_component decorator. If different base images are explicitly specified in both places, an error is raised.
- extra_code – Optional. Extra code to add before the function code. Can be used as a workaround to define types used in the function signature.

Returns: Textual representation of a component definition
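A quick sketch of producing the component text for sharing:

```python
from kfp.components import func_to_component_text

def add(a: float, b: float) -> float:
    """Returns the sum of two arguments."""
    return a + b

# The returned YAML text can be saved to a file and later loaded with
# load_component_from_text or load_component_from_file.
component_text = func_to_component_text(add)
print(component_text)
```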
kfp.components.func_to_container_op(func, output_component_file=None, base_image='tensorflow/tensorflow:1.11.0-py3', extra_code='')
Converts a Python function to a component and returns a task (ContainerOp) factory.

The function docstring is used as the component description. Argument and return annotations are used as component input/output types. To declare a function with multiple return values, use the NamedTuple return annotation syntax:

```python
from typing import NamedTuple

def add_multiply_two_numbers(a: float, b: float) -> NamedTuple('DummyName', [('sum', float), ('product', float)]):
    """Returns sum and product of two arguments"""
    return (a + b, a * b)
```

Parameters:
- func – The python function to convert
- base_image – Optional. Specify a custom Docker container image to use in the component. For lightweight components, the image needs to have Python 3.5+. Default is tensorflow/tensorflow:1.11.0-py3. Note: the image can also be specified by decorating the function with the @python_component decorator. If different base images are explicitly specified in both places, an error is raised.
- output_component_file – Optional. Write a component definition to a local file. Can be used for sharing.
- extra_code – Optional. Extra code to add before the function code. Can be used as a workaround to define types used in the function signature.

Returns: A factory function with a strongly-typed signature taken from the python function. Once called with the required arguments, the factory constructs a pipeline task instance (ContainerOp) that can run the original function in a container.
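A minimal sketch of turning a plain function into a pipeline op; the base image choice is illustrative:

```python
from kfp.components import func_to_container_op

def add(a: float, b: float) -> float:
    """Returns the sum of two arguments."""
    return a + b

# Any image with Python 3.5+ works for lightweight components.
add_op = func_to_container_op(add, base_image='python:3.7')

# Inside a pipeline function, calling the factory creates a ContainerOp:
# task = add_op(1, 2)
```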
kfp.components.load_component(filename=None, url=None, text=None)
Loads a component from text, a file or a URL, and creates a task factory function.

Only one argument should be specified.

Parameters:
- filename – Path of a local file containing the component definition.
- url – The URL of the component file data.
- text – A string containing the component file data.

Returns: A factory function with a strongly-typed signature. Once called with the required arguments, the factory constructs a pipeline task instance (ContainerOp).
kfp.components.load_component_from_file(filename)
Loads a component from a file and creates a task factory function.

Parameters: filename – Path of a local file containing the component definition.
Returns: A factory function with a strongly-typed signature. Once called with the required arguments, the factory constructs a pipeline task instance (ContainerOp).
kfp.components.load_component_from_text(text)
Loads a component from text and creates a task factory function.

Parameters: text – A string containing the component file data.
Returns: A factory function with a strongly-typed signature. Once called with the required arguments, the factory constructs a pipeline task instance (ContainerOp).
kfp.components.load_component_from_url(url)
Loads a component from a URL and creates a task factory function.

Parameters: url – The URL of the component file data.
Returns: A factory function with a strongly-typed signature. Once called with the required arguments, the factory constructs a pipeline task instance (ContainerOp).
kfp.components.structures subpackage
kfp.components.structures package
kfp.components.structures.kubernetes package
class kfp.components.structures.kubernetes.v1.Container(image: Optional[str] = None, command: Optional[List[str]] = None, args: Optional[List[str]] = None, env: Optional[List[kfp.components.structures.kubernetes.v1.EnvVar]] = None, working_dir: Optional[str] = None, lifecycle: Optional[kfp.components.structures.kubernetes.v1.Lifecycle] = None, volume_mounts: Optional[List[kfp.components.structures.kubernetes.v1.VolumeMount]] = None, resources: Optional[kfp.components.structures.kubernetes.v1.ResourceRequirements] = None, ports: Optional[List[kfp.components.structures.kubernetes.v1.ContainerPort]] = None, volume_devices: Optional[List[kfp.components.structures.kubernetes.v1.VolumeDevice]] = None, name: Optional[str] = None, image_pull_policy: Optional[str] = None, liveness_probe: Optional[kfp.components.structures.kubernetes.v1.Probe] = None, readiness_probe: Optional[kfp.components.structures.kubernetes.v1.Probe] = None, security_context: Optional[kfp.components.structures.kubernetes.v1.SecurityContext] = None, stdin: Optional[bool] = None, stdin_once: Optional[bool] = None, termination_message_path: Optional[str] = None, termination_message_policy: Optional[str] = None, tty: Optional[bool] = None)
Bases: kfp.components.modelbase.ModelBase
kfp.dsl package
class kfp.dsl.Condition(condition)
Bases: kfp.dsl._ops_group.OpsGroup

Represents a condition group with a condition.

Example usage:

```python
with Condition(param1 == 'pizza'):
    op1 = ContainerOp(...)
    op2 = ContainerOp(...)
```
class kfp.dsl.ContainerOp(name: str, image: str, command: Union[str, List[str]] = None, arguments: Union[str, List[str]] = None, sidecars: List[kfp.dsl._container_op.Sidecar] = None, container_kwargs: Dict = None, file_outputs: Dict[str, str] = None, output_artifact_paths: Dict[str, str] = None, is_exit_handler=False, pvolumes: Dict[str, kubernetes.client.models.v1_volume.V1Volume] = None)
Bases: kfp.dsl._container_op.BaseOp

Represents an op implemented by a container image.

Example:

```python
from kfp import dsl
from kubernetes.client.models import V1EnvVar

@dsl.pipeline(
    name='foo',
    description='hello world')
def foo_pipeline(tag: str, pull_image_policy: str):
    # any attribute can be parameterized (both serialized string or actual PipelineParam)
    op = dsl.ContainerOp(
        name='foo',
        image='busybox:%s' % tag,
        # pass in sidecars list
        sidecars=[dsl.Sidecar('print', 'busybox:latest', command='echo "hello"')],
        # pass in k8s container kwargs
        container_kwargs={'env': [V1EnvVar('foo', 'bar')]})

    # set imagePullPolicy property for the container with a PipelineParam
    op.container.set_image_pull_policy(pull_image_policy)

    # add a sidecar with a parameterized image tag
    # sidecar follows the argo sidecar swagger spec
    op.add_sidecar(dsl.Sidecar('redis', 'redis:%s' % tag).set_image_pull_policy('Always'))
```
Properties:

- arguments
- command
- container – Container object that represents the container property in io.argoproj.workflow.v1alpha1.Template. Can be used to update the container configurations (see the example below).
- env_variables
- image

Example of configuring the container property:

```python
import kfp.dsl as dsl
from kubernetes.client.models import V1EnvVar

@dsl.pipeline(name='example_pipeline')
def immediate_value_pipeline():
    op1 = (dsl.ContainerOp(name='example', image='nginx:alpine')
           .container
           .add_env_variable(V1EnvVar(name='HOST', value='foo.bar'))
           .add_env_variable(V1EnvVar(name='PORT', value='80'))
           .parent)  # return the parent ContainerOp
```
class kfp.dsl.ExitHandler(exit_op: kfp.dsl._container_op.ContainerOp)
Bases: kfp.dsl._ops_group.OpsGroup

Represents an exit handler that is invoked upon exiting a group of ops.

Example usage:

```python
exit_op = ContainerOp(...)
with ExitHandler(exit_op):
    op1 = ContainerOp(...)
    op2 = ContainerOp(...)
```
class kfp.dsl.PipelineParam(name: str, op_name: str = None, value: str = None, param_type: kfp.dsl._metadata.TypeMeta = <kfp.dsl._metadata.TypeMeta object>, pattern: str = None)
Bases: object

Represents a future value that is passed between pipeline components.

A PipelineParam object can be used as a pipeline function argument so that it becomes a pipeline parameter that shows up in the ML Pipelines system UI. It can also represent an intermediate value passed between components.

Properties:
- full_name – Unique name in the argo yaml for the PipelineParam
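A sketch of a PipelineParam flowing through a pipeline; the op name, image, and command are illustrative:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='param-example', description='PipelineParam usage sketch')
def param_pipeline(rounds=dsl.PipelineParam(name='rounds', value='10')):
    # 'rounds' surfaces as a pipeline parameter in the UI; its value is
    # resolved at run time, including inside serialized strings.
    train = dsl.ContainerOp(
        name='train',
        image='busybox:latest',
        command=['sh', '-c'],
        arguments=['echo training for %s rounds' % rounds])
```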
class kfp.dsl.PipelineVolume(pvc: str = None, volume: kubernetes.client.models.v1_volume.V1Volume = None, **kwargs)
Bases: kubernetes.client.models.v1_volume.V1Volume

Represents a volume that is passed between pipeline operators and is to be mounted by a ContainerOp or its inherited type.

A PipelineVolume object can be used as an extension of the pipeline function's filesystem. It may then be passed between ContainerOps, exposing dependencies.
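A sketch of passing data between ops through a PipelineVolume. The PVC name is a placeholder, and the sketch assumes the op's pvolume attribute, which an op exposes when a single volume is mounted:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='volume-example')
def volume_pipeline():
    # 'existing-pvc' is a hypothetical pre-created PersistentVolumeClaim.
    vol = dsl.PipelineVolume(pvc='existing-pvc')
    step1 = dsl.ContainerOp(
        name='producer', image='busybox:latest',
        command=['sh', '-c', 'echo hello > /data/out.txt'],
        pvolumes={'/data': vol})
    step2 = dsl.ContainerOp(
        name='consumer', image='busybox:latest',
        command=['sh', '-c', 'cat /data/out.txt'],
        pvolumes={'/data': step1.pvolume})  # depends on step1 via the volume
```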
class kfp.dsl.ResourceOp(k8s_resource=None, action: str = 'create', merge_strategy: str = None, success_condition: str = None, failure_condition: str = None, attribute_outputs: Dict[str, str] = None, **kwargs)
Bases: kfp.dsl._container_op.BaseOp

Represents an op which will be translated into a resource template.

Properties:
- resource – Resource object that represents the resource property in io.argoproj.workflow.v1alpha1.Template.
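A sketch of creating a raw Kubernetes resource from a pipeline. The ConfigMap manifest is illustrative, and the sketch assumes ResourceOp accepts a plain manifest dict here:

```python
import kfp.dsl as dsl

# Hypothetical manifest; any Kubernetes resource can be used.
_CONFIGMAP = {
    'apiVersion': 'v1',
    'kind': 'ConfigMap',
    'metadata': {'name': 'my-config'},
    'data': {'key': 'value'},
}

@dsl.pipeline(name='resourceop-example')
def resourceop_pipeline():
    # Translates into an Argo resource template that creates the manifest.
    rop = dsl.ResourceOp(
        name='create-configmap',
        k8s_resource=_CONFIGMAP,
        action='create',
        attribute_outputs={'name': '{.metadata.name}'})
```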
class kfp.dsl.Sidecar(name: str, image: str, command: Union[str, List[str]] = None, args: Union[str, List[str]] = None, mirror_volume_mounts: bool = None, **kwargs)
Bases: kfp.dsl._container_op.Container

Represents an argo workflow sidecar (io.argoproj.workflow.v1alpha1.Sidecar) to be used in the sidecars property of argo's workflow template (io.argoproj.workflow.v1alpha1.Template).

Sidecar inherits from the Container class, with the addition of the mirror_volume_mounts attribute (mirrorVolumeMounts property).

See https://github.com/argoproj/argo/blob/master/api/openapi-spec/swagger.json

Example:

```python
from kfp.dsl import ContainerOp, Sidecar

# creates a ContainerOp and adds a redis Sidecar
op = (ContainerOp(name='foo-op', image='busybox:latest')
      .add_sidecar(Sidecar(name='redis', image='redis:alpine')))
```
attribute_map = {'args': 'args', 'command': 'command', 'env': 'env', 'env_from': 'envFrom', 'image': 'image', 'image_pull_policy': 'imagePullPolicy', 'lifecycle': 'lifecycle', 'liveness_probe': 'livenessProbe', 'mirror_volume_mounts': 'mirrorVolumeMounts', 'name': 'name', 'ports': 'ports', 'readiness_probe': 'readinessProbe', 'resources': 'resources', 'security_context': 'securityContext', 'stdin': 'stdin', 'stdin_once': 'stdinOnce', 'termination_message_path': 'terminationMessagePath', 'termination_message_policy': 'terminationMessagePolicy', 'tty': 'tty', 'volume_devices': 'volumeDevices', 'volume_mounts': 'volumeMounts', 'working_dir': 'workingDir'}

inputs – A list of PipelineParam found in the Sidecar object.

set_mirror_volume_mounts(mirror_volume_mounts=True)
Setting mirrorVolumeMounts to true will mount the same volumes specified in the main container to the sidecar (including artifacts), at the same mountPaths. This enables a dind daemon to partially see the same filesystem as the main container, in order to use features such as docker volume binding.

Parameters: mirror_volume_mounts – boolean flag

swagger_types = {'args': 'list[str]', 'command': 'list[str]', 'env': 'list[V1EnvVar]', 'env_from': 'list[V1EnvFromSource]', 'image': 'str', 'image_pull_policy': 'str', 'lifecycle': 'V1Lifecycle', 'liveness_probe': 'V1Probe', 'mirror_volume_mounts': 'bool', 'name': 'str', 'ports': 'list[V1ContainerPort]', 'readiness_probe': 'V1Probe', 'resources': 'V1ResourceRequirements', 'security_context': 'V1SecurityContext', 'stdin': 'bool', 'stdin_once': 'bool', 'termination_message_path': 'str', 'termination_message_policy': 'str', 'tty': 'bool', 'volume_devices': 'list[V1VolumeDevice]', 'volume_mounts': 'list[V1VolumeMount]', 'working_dir': 'str'}
class kfp.dsl.VolumeOp(resource_name: str = None, size: str = None, storage_class: str = None, modes: List[str] = ['ReadWriteMany'], annotations: Dict[str, str] = None, data_source=None, **kwargs)
Bases: kfp.dsl._resource_op.ResourceOp

Represents an op which will be translated into a resource template which will create a PVC.
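A sketch of creating and mounting a PVC in a pipeline; the resource name and size are illustrative:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='volumeop-example')
def volumeop_pipeline():
    vop = dsl.VolumeOp(
        name='create-pvc',
        resource_name='my-pvc',   # placeholder PVC name
        size='1Gi',
        modes=['ReadWriteOnce'])

    # Mount the newly created volume into a step via pvolumes.
    step = dsl.ContainerOp(
        name='writer',
        image='busybox:latest',
        command=['sh', '-c', 'echo hi > /mnt/out.txt'],
        pvolumes={'/mnt': vop.volume})
```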
class kfp.dsl.VolumeSnapshotOp(resource_name: str = None, pvc: str = None, snapshot_class: str = None, annotations: Dict[str, str] = None, volume: kubernetes.client.models.v1_volume.V1Volume = None, **kwargs)
Bases: kfp.dsl._resource_op.ResourceOp

Represents an op which will be translated into a resource template which will create a VolumeSnapshot.

At the time this feature was written, VolumeSnapshots were an Alpha feature in Kubernetes. Check with your Kubernetes cluster admin whether it is enabled.
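A sketch of snapshotting a volume produced earlier in the pipeline; names and sizes are placeholders, and the step's pvolume attribute is assumed to carry the dependency:

```python
import kfp.dsl as dsl

@dsl.pipeline(name='snapshot-example')
def snapshot_pipeline():
    vop = dsl.VolumeOp(name='create-pvc', resource_name='my-pvc',
                       size='1Gi', modes=['ReadWriteOnce'])

    step = dsl.ContainerOp(name='writer', image='busybox:latest',
                           command=['sh', '-c', 'echo hi > /mnt/out.txt'],
                           pvolumes={'/mnt': vop.volume})

    # Snapshot the PVC after the writer step has used it.
    snap = dsl.VolumeSnapshotOp(
        name='snapshot',
        resource_name='my-snap',
        volume=step.pvolume)
```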
kfp.dsl.component(func)
Decorator for component functions that return a ContainerOp. This is useful to enable type checking in the DSL compiler.

Usage:

```python
@dsl.component
def foobar(model: TFModel(), step: MLStep()):
    return dsl.ContainerOp()
```
kfp.dsl.get_pipeline_conf()
Configure the pipeline-level settings of the current pipeline. Note: call this function inside a user-defined pipeline function.
kfp.dsl.graph_component(func)
Decorator for graph component functions. This decorator returns an ops_group.

Usage:

```python
import kfp.dsl as dsl

@dsl.graph_component
def flip_component(flip_result):
    print_flip = PrintOp(flip_result)
    flipA = FlipCoinOp().after(print_flip)
    with dsl.Condition(flipA.output == 'heads'):
        flip_component(flipA.output)  # recursive call
    return {'flip_result': flipA.output}
```
kfp.dsl.pipeline(name, description)
Decorator of pipeline functions.

Usage:

```python
@dsl.pipeline(
    name='my awesome pipeline',
    description='Is it really awesome?')
def my_pipeline(a: PipelineParam, b: PipelineParam):
    ...
```
kfp.dsl.python_component(name, description=None, base_image=None, target_component_file: str = None)
Decorator for Python component functions. This decorator adds metadata to the function object itself.

Parameters:
- name – Human-readable name of the component
- description – Optional. Description of the component
- base_image – Optional. Docker container image to use as the base of the component. Needs to have Python 3.5+ installed.
- target_component_file – Optional. Local file to store the component definition. The file can then be used for sharing.

Returns: The same function (with some metadata fields set).

Usage:

```python
@dsl.python_component(
    name='my awesome component',
    description="Come, Let's play",
    base_image='tensorflow/tensorflow:1.11.0-py3')
def my_component(a: str, b: int) -> str:
    ...
```
kfp.dsl.types module
class kfp.dsl.types.BaseType
Bases: object

BaseType is the base type for all scalar and artifact types.
class kfp.dsl.types.Bool
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'boolean'}

class kfp.dsl.types.Dict
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'object'}

class kfp.dsl.types.Float
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'number'}
class kfp.dsl.types.GCPProjectID
Bases: kfp.dsl.types.BaseType

GCPProjectID: GCP project id

openapi_schema_validator = {'type': 'string'}
class kfp.dsl.types.GCPRegion
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'string'}

class kfp.dsl.types.GCRPath
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'pattern': '^.*gcr\\.io/.*$', 'type': 'string'}

class kfp.dsl.types.GCSPath
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'pattern': '^gs://.*$', 'type': 'string'}
exception kfp.dsl.types.InconsistentTypeException
Bases: Exception

InconsistentTypeException is raised when two types are not consistent.
class kfp.dsl.types.Integer
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'integer'}

class kfp.dsl.types.List
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'array'}

class kfp.dsl.types.LocalPath
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'string'}

class kfp.dsl.types.String
Bases: kfp.dsl.types.BaseType

openapi_schema_validator = {'type': 'string'}
kfp.dsl.types.check_types(checked_type, expected_type)
check_types checks the type consistency: for each attribute in checked_type, expected_type must contain the same attribute with the same value. However, expected_type may contain attributes that checked_type does not.

Parameters:
- checked_type (BaseType/str/dict) – describes a type from the upstream component output
- expected_type (BaseType/str/dict) – describes a type from the downstream component input
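A sketch of the consistency check, assuming check_types returns a boolean verdict:

```python
from kfp.dsl.types import check_types, Integer, GCSPath

# Identical types are consistent.
check_types(Integer(), Integer())  # expected: True

# A GCSPath output feeding an Integer input is not.
check_types(GCSPath(), Integer())  # expected: False
```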
kfp.Client class
class kfp.Client(host=None, client_id=None)
Bases: object

API client for Kubeflow Pipelines.

IN_CLUSTER_DNS_NAME = 'ml-pipeline.kubeflow.svc.cluster.local:8888'
create_experiment(name)
Create a new experiment.

Parameters: name – the name of the experiment.
Returns: An Experiment object. The most important field is id.
get_experiment(experiment_id=None, experiment_name=None)
Get details of an experiment. Either experiment_id or experiment_name is required.

Parameters:
- experiment_id – id of the experiment. (Optional)
- experiment_name – name of the experiment. (Optional)

Returns: A response object including details of an experiment.
Throws: Exception if the experiment is not found or neither of the arguments is provided.
get_run(run_id)
Get run details.

Parameters: run_id – id of the run.
Returns: A response object including details of a run.
Throws: Exception if the run is not found.
list_experiments(page_token='', page_size=10, sort_by='')
List experiments.

Parameters:
- page_token – token for the start of the page.
- page_size – size of the page.
- sort_by – can be '[field_name]' or '[field_name] desc'. For example, 'name desc'.

Returns: A response object including a list of experiments and the next page token.
list_runs(page_token='', page_size=10, sort_by='', experiment_id=None)
List runs.

Parameters:
- page_token – token for the start of the page.
- page_size – size of the page.
- sort_by – one of '[field_name]' or '[field_name] desc'. For example, 'name desc'.
- experiment_id – experiment id to filter upon.

Returns: A response object including a list of runs and the next page token.
run_pipeline(experiment_id, job_name, pipeline_package_path=None, params={}, pipeline_id=None)
Run a specified pipeline.

Parameters:
- experiment_id – The string id of an experiment.
- job_name – name of the job.
- pipeline_package_path – local path of the pipeline package (the filename should end with one of the following: .tar.gz, .tgz, .zip, .yaml, .yml).
- params – a dictionary with key (string) as param name and value (string) as param value.
- pipeline_id – the string ID of a pipeline.

Returns: A run object. The most important field is id.
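A minimal end-to-end sketch; the host, package path, and parameter names are placeholders:

```python
import kfp

# When running outside the cluster, pass the API host explicitly
# (e.g. a port-forward to the ml-pipeline service; the URL is hypothetical).
client = kfp.Client(host='http://localhost:8080')

experiment = client.create_experiment('my-experiment')

# 'pipeline.tar.gz' is the output of kfp.compiler.Compiler().compile(...).
run = client.run_pipeline(
    experiment_id=experiment.id,
    job_name='my-first-run',
    pipeline_package_path='pipeline.tar.gz',
    params={'a': '1', 'b': '2'})
print(run.id)
```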
kfp.notebook package
KFP extension modules
kfp.onprem module
kfp.onprem.mount_pvc(pvc_name='pipeline-claim', volume_name='pipeline', volume_mount_path='/mnt/pipeline')
Modifier function to apply to a ContainerOp to simplify volume and volume mount addition, and to enable better reuse of volumes and volume claims across container ops.

Usage:

```python
train = train_op(...)
train.apply(mount_pvc('claim-name', 'pipeline', '/mnt/pipeline'))
```
kfp.gcp module
kfp.gcp.use_gcp_secret(secret_name='user-gcp-sa', secret_file_path_in_volume='/user-gcp-sa.json', volume_name='gcp-credentials', secret_volume_mount_path='/secret/gcp-credentials')
An operator that configures the container to use a GCP service account.

The user-gcp-sa secret is created as part of the kubeflow deployment and stores the access token for the kubeflow user service account.

With this service account, the container has access to a range of GCP APIs. This service account is automatically created as part of the kubeflow deployment.

For the list of GCP APIs this service account can access, check https://github.com/kubeflow/kubeflow/blob/7b0db0d92d65c0746ac52b000cbc290dac7c62b1/deployment/gke/deployment_manager_configs/iam_bindings_template.yaml#L18

If you want to call GCP APIs in a different project, grant the kf-user service account access permission.
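A usage sketch applying the modifier to an op; the op name and image are illustrative:

```python
import kfp.dsl as dsl
from kfp.gcp import use_gcp_secret

@dsl.pipeline(name='gcp-secret-example')
def gcp_pipeline():
    train = dsl.ContainerOp(name='train', image='gcr.io/my-project/train:v1')
    # Mount the default 'user-gcp-sa' secret so GCP client libraries
    # inside the container can authenticate.
    train.apply(use_gcp_secret('user-gcp-sa'))
```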
kfp.gcp.use_tpu(tpu_cores: int, tpu_resource: str, tf_version: str)
An operator that configures the GCP TPU spec in a container op.

Parameters:
- tpu_cores – Required. The number of cores of the TPU resource. For example, the value can be '8', '32', '128', etc. Check more details at: https://cloud.google.com/tpu/docs/kubernetes-engine-setup#pod-spec.
- tpu_resource – Required. The resource name of the TPU resource. For example, the value can be 'v2', 'preemptible-v1', 'v3' or 'preemptible-v3'. Check more details at: https://cloud.google.com/tpu/docs/kubernetes-engine-setup#pod-spec.
- tf_version – Required. The TensorFlow version that the TPU nodes use. For example, the value can be '1.12', '1.11', '1.9' or '1.8'. Check more details at: https://cloud.google.com/tpu/docs/supported-versions.
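A usage sketch; the core count, resource name, and TF version are illustrative values drawn from the parameter descriptions above:

```python
import kfp.dsl as dsl
from kfp.gcp import use_tpu

@dsl.pipeline(name='tpu-example')
def tpu_pipeline():
    train = dsl.ContainerOp(name='train', image='gcr.io/my-project/tpu-train:v1')
    # Request 8 v2 TPU cores with TensorFlow 1.12.
    train.apply(use_tpu(tpu_cores=8, tpu_resource='v2', tf_version='1.12'))
```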
kfp.aws module
kfp.aws.use_aws_secret(secret_name='aws-secret', aws_access_key_id_name='AWS_ACCESS_KEY_ID', aws_secret_access_key_name='AWS_SECRET_ACCESS_KEY')
An operator that configures the container to use AWS credentials.

AWS doesn't create the secret along with the kubeflow deployment; users must manually create a credential secret with proper permissions:

```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: aws-secret
type: Opaque
data:
  AWS_ACCESS_KEY_ID: BASE64_YOUR_AWS_ACCESS_KEY_ID
  AWS_SECRET_ACCESS_KEY: BASE64_YOUR_AWS_SECRET_ACCESS_KEY
```
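A usage sketch; the op name and image are illustrative:

```python
import kfp.dsl as dsl
from kfp.aws import use_aws_secret

@dsl.pipeline(name='aws-secret-example')
def aws_pipeline():
    train = dsl.ContainerOp(name='train', image='my-registry/train:v1')
    # Expose the keys from the 'aws-secret' Secret above as environment variables.
    train.apply(use_aws_secret('aws-secret'))
```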
kfp.azure module
kfp.azure.use_azure_secret(secret_name='azcreds')
An operator that configures the container to use Azure user credentials.

The azcreds secret is created as part of the kubeflow deployment and stores the client ID and secrets for the kubeflow Azure service principal.

With this service principal, the container has access to a range of Azure APIs.
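A usage sketch; the op name and image are illustrative:

```python
import kfp.dsl as dsl
from kfp.azure import use_azure_secret

@dsl.pipeline(name='azure-secret-example')
def azure_pipeline():
    train = dsl.ContainerOp(name='train', image='myregistry.azurecr.io/train:v1')
    # Mount the default 'azcreds' service-principal secret into the container.
    train.apply(use_azure_secret('azcreds'))
```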