bezier

Helper for Bézier Curves, Triangles, and Higher Order Objects


bezier package

Helper for Bézier Curves, Triangles, and Higher Order Objects.

Intended to perform basic operations on Bézier objects, such as intersections, length/area/etc. computations, subdivision and implicitization, and to provide other relevant information.

Plotting utilities are also provided.

bezier.get_include()

Get the directory with .h header files.

Extension modules (and Cython modules) that need to compile against bezier should use this function to locate the appropriate include directory.

For more information, see Native Libraries.

Returns:include directory that contains header files for the libbezier Fortran library.
Return type:str
bezier.get_lib()

Get the directory with .a / .lib static libraries.

Extension modules (and Cython modules) that need to compile against the libbezier static library should use this function to locate the appropriate lib directory.

For more information, see Native Libraries.

Returns:lib directory that contains static libraries for the libbezier Fortran library.
Return type:str
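For illustration, a minimal sketch of how a setuptools-based build might consume both helpers (the extension name ``wrapper``, its source file and the linked library name are placeholder assumptions, not prescribed by this documentation):

>>> import bezier
>>> from setuptools import Extension
>>> extension = Extension(
...     "wrapper",
...     sources=["wrapper.c"],
...     include_dirs=[bezier.get_include()],
...     library_dirs=[bezier.get_lib()],
...     libraries=["bezier"],
... )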

Submodules

bezier.curve module

Helper for Bézier Curves.

See Curve-Curve Intersection for examples using the Curve class to find intersections.

class bezier.curve.IntersectionStrategy

Enum determining the type of intersection algorithm to use.
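For example, a small sketch of passing the strategy explicitly to Curve.intersect() (``curve1`` and ``curve2`` are assumed to be previously constructed Curve objects; ``geometric`` is the default member shown in the intersect() signature below):

>>> from bezier.curve import IntersectionStrategy
>>> intersections = curve1.intersect(
...     curve2, strategy=IntersectionStrategy.geometric)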

class bezier.curve.Curve(nodes, degree, start=0.0, end=1.0, root=None, _copy=True)

Bases: bezier._base.Base

Represents a Bézier curve.

We take the traditional definition: a Bézier curve is a mapping from \(s \in \left[0, 1\right]\) to convex combinations of points \(v_0, v_1, \ldots, v_n\) in some vector space:

\[B(s) = \sum_{j = 0}^n \binom{n}{j} s^j (1 - s)^{n - j} \cdot v_j\]
_images/curve_constructor.png
>>> import bezier
>>> import numpy as np
>>> nodes = np.asfortranarray([
...     [0.0  , 0.0],
...     [0.625, 0.5],
...     [1.0  , 0.5],
... ])
>>> curve = bezier.Curve(nodes, degree=2)
>>> curve
<Curve (degree=2, dimension=2)>
Parameters:
  • nodes (numpy.ndarray) – The nodes in the curve. The rows represent each node while the columns are the dimension of the ambient space.
  • degree (int) – The degree of the curve. This is assumed to correctly correspond to the number of nodes. Use from_nodes() if the degree has not yet been computed.
  • start (Optional [ float ]) – The beginning of the sub-interval that this curve represents.
  • end (Optional [ float ]) – The end of the sub-interval that this curve represents.
  • root (Optional [ Curve ]) – The root curve that contains this current curve.
  • _copy (bool) – Flag indicating if the nodes should be copied before being stored. Defaults to True since callers may freely mutate nodes after passing in.
degree

int – The degree of the current shape.

dimension

int – The dimension that the shape lives in.

For example, if the shape lives in \(\mathbf{R}^3\), then the dimension is 3.

edge_index

Optional [ int ] – The index of the edge among a group of edges.

>>> curve.edge_index
1
>>> curve.previous_edge
<Curve (degree=1, dimension=2)>
>>> curve.previous_edge.edge_index
0
>>> curve.next_edge
<Curve (degree=1, dimension=2)>
>>> curve.next_edge.edge_index
2

This is intended to be used when a Curve is created as part of a larger structure like a Surface or CurvedPolygon.

elevate()

Return a degree-elevated version of the current curve.

Does this by converting the current nodes \(v_0, \ldots, v_n\) to new nodes \(w_0, \ldots, w_{n + 1}\) where

\[\begin{split}\begin{align*} w_0 &= v_0 \\ w_j &= \frac{j}{n + 1} v_{j - 1} + \frac{n + 1 - j}{n + 1} v_j \\ w_{n + 1} &= v_n \end{align*}\end{split}\]
_images/curve_elevate.png
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.5, 1.5],
...     [3.0, 0.0],
... ])
>>> curve = bezier.Curve(nodes, degree=2)
>>> elevated = curve.elevate()
>>> elevated
<Curve (degree=3, dimension=2)>
>>> elevated.nodes
array([[ 0., 0.],
       [ 1., 1.],
       [ 2., 1.],
       [ 3., 0.]])
Returns:The degree-elevated curve.
Return type:Curve
end

float – End of sub-interval this curve represents.

See start for more information.

evaluate(s)

Evaluate \(B(s)\) along the curve.

This method acts as a (partial) inverse to locate().

See evaluate_multi() for more details.

_images/curve_evaluate.png
>>> nodes = np.asfortranarray([
...     [0.0  , 0.0],
...     [0.625, 0.5],
...     [1.0  , 0.5],
... ])
>>> curve = bezier.Curve(nodes, degree=2)
>>> curve.evaluate(0.75)
array([[ 0.796875, 0.46875 ]])
Parameters:s (float) – Parameter along the curve.
Returns:The point on the curve (as a two dimensional NumPy array with a single row).
Return type:numpy.ndarray
evaluate_multi(s_vals)

Evaluate \(B(s)\) for multiple points along the curve.

This is done via a modified Horner’s method (vectorized for each s-value).

>>> nodes = np.asfortranarray([
...     [0.0, 0.0, 0.0],
...     [1.0, 2.0, 3.0],
... ])
>>> curve = bezier.Curve(nodes, degree=1)
>>> curve
<Curve (degree=1, dimension=3)>
>>> s_vals = np.linspace(0.0, 1.0, 5)
>>> curve.evaluate_multi(s_vals)
array([[ 0.  , 0.  , 0.  ],
       [ 0.25, 0.5 , 0.75],
       [ 0.5 , 1.  , 1.5 ],
       [ 0.75, 1.5 , 2.25],
       [ 1.  , 2.  , 3.  ]])
Parameters:s_vals (numpy.ndarray) – Parameters along the curve (as a 1D array).
Returns:The points on the curve. As a two dimensional NumPy array, with the rows corresponding to each s value and the columns to the dimension.
Return type:numpy.ndarray
classmethod from_nodes(nodes, start=0.0, end=1.0, root=None, _copy=True)

Create a Curve from nodes.

Computes the degree based on the shape of nodes.

Parameters:
  • nodes (numpy.ndarray) – The nodes in the curve. The rows represent each node while the columns are the dimension of the ambient space.
  • start (Optional [ float ]) – The beginning of the sub-interval that this curve represents.
  • end (Optional [ float ]) – The end of the sub-interval that this curve represents.
  • root (Optional [ Curve ]) – The root curve that contains this current curve.
  • _copy (bool) – Flag indicating if the nodes should be copied before being stored. Defaults to True since callers may freely mutate nodes after passing in.
Returns:

The constructed curve.

Return type:

Curve
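A short usage sketch (not from the original docstring): with three rows of nodes the computed degree is 2, matching the constructor example above.

>>> nodes = np.asfortranarray([
...     [0.0  , 0.0],
...     [0.625, 0.5],
...     [1.0  , 0.5],
... ])
>>> curve = bezier.Curve.from_nodes(nodes)
>>> curve
<Curve (degree=2, dimension=2)>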

intersect(other, strategy=<IntersectionStrategy.geometric: 'geometric'>, _verify=True)

Find the points of intersection with another curve.

See Curve-Curve Intersection for more details.

_images/curve_intersect.png
>>> nodes1 = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.375, 0.75 ],
...     [0.75 , 0.375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.5, 0.0 ],
...     [0.5, 0.75],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> intersections = curve1.intersect(curve2)
>>> intersections
array([[ 0.5, 0.5]])
Parameters:
  • other (Curve) – Other curve to intersect with.
  • strategy (Optional [ IntersectionStrategy ]) – The intersection algorithm to use. Defaults to geometric.
  • _verify (Optional [ bool ]) – Indicates if extra caution should be used to verify assumptions about the input and current curve. Can be disabled to speed up execution time. Defaults to True.
Returns:

Array of intersection points (possibly empty).

Return type:

numpy.ndarray

Raises:
  • TypeError – If other is not a curve (and _verify=True).
  • NotImplementedError – If at least one of the curves isn’t two-dimensional (and _verify=True).
length

float – The length of the current curve.

locate(point)

Find a point on the current curve.

Solves for \(s\) in \(B(s) = p\).

This method acts as a (partial) inverse to evaluate().

Note

A unique solution is only guaranteed if the current curve has no self-intersections. This code assumes, but doesn’t check, that this is true.

_images/curve_locate.png
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 2.0],
...     [3.0, 1.0],
...     [4.0, 0.0],
... ])
>>> curve = bezier.Curve(nodes, degree=3)
>>> point1 = np.asfortranarray([[3.09375, 0.703125]])
>>> s = curve.locate(point1)
>>> s
0.75
>>> point2 = np.asfortranarray([[2.0, 0.5]])
>>> curve.locate(point2) is None
True
Parameters:point (numpy.ndarray) – A (1xD) point on the curve, where \(D\) is the dimension of the curve.
Returns:The parameter value (\(s\)) corresponding to point or None if the point is not on the curve.
Return type:Optional [ float ]
Raises:ValueError – If the dimension of the point doesn’t match the dimension of the current curve.
next_edge

Optional [ Curve ] – An edge that comes after the current one.

This is intended to be used when a Curve is created as part of a larger structure like a Surface or CurvedPolygon.

nodes

numpy.ndarray – The nodes that define the current shape.

plot(num_pts, color=None, alpha=None, ax=None)

Plot the current curve.

Parameters:
Returns:

The axis containing the plot. This may be a newly created axis.

Return type:

matplotlib.artist.Artist

Raises:

NotImplementedError – If the curve’s dimension is not 2.
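A hedged usage sketch based only on the signature above (``curve`` is assumed to be an existing two-dimensional Curve, ``256`` is an arbitrary sample count, and matplotlib is assumed to be installed):

>>> import matplotlib.pyplot as plt
>>> figure = plt.figure()
>>> ax = figure.gca()
>>> _ = curve.plot(256, ax=ax)
>>> figure.savefig("curve.png")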

previous_edge

Optional [ Curve ] – An edge that comes before the current one.

This is intended to be used when a Curve is created as part of a larger structure like a Surface or CurvedPolygon.

reduce_()

Return a degree-reduced version of the current curve.

Does this by converting the current nodes \(v_0, \ldots, v_n\) to new nodes \(w_0, \ldots, w_{n - 1}\) that correspond to reversing the elevate() process.

This uses the pseudo-inverse of the elevation matrix. For example when elevating from degree 2 to 3, the matrix \(E_2\) is given by

\[\begin{split}\mathbf{v} = \left[\begin{array}{c} v_0 \\ v_1 \\ v_2 \end{array}\right] \longmapsto \left[\begin{array}{c} v_0 \\ \frac{v_0 + 2 v_1}{3} \\ \frac{2 v_1 + v_2}{3} \\ v_2 \end{array}\right] = \frac{1}{3} \left[\begin{array}{c c c} 3 & 0 & 0 \\ 1 & 2 & 0 \\ 0 & 2 & 1 \\ 0 & 0 & 3 \end{array}\right] \mathbf{v}\end{split}\]

and the pseudo-inverse is given by

\[\begin{split}R_2 = \left(E_2^T E_2\right)^{-1} E_2^T = \frac{1}{20} \left[\begin{array}{c c c c} 19 & 3 & -3 & 1 \\ -5 & 15 & 15 & -5 \\ 1 & -3 & 3 & 19 \end{array}\right].\end{split}\]

Warning

Though degree-elevation preserves the start and end nodes, degree reduction has no such guarantee. Rather, the nodes produced are “best” in the least squares sense (when solving the normal equations).

_images/curve_reduce.png
>>> nodes = np.asfortranarray([
...     [-3.0, 3.0],
...     [ 0.0, 2.0],
...     [ 1.0, 3.0],
...     [ 0.0, 6.0],
... ])
>>> curve = bezier.Curve(nodes, degree=3)
>>> reduced = curve.reduce_()
>>> reduced
<Curve (degree=2, dimension=2)>
>>> reduced.nodes
array([[-3. , 3. ],
       [ 1.5, 1.5],
       [ 0. , 6. ]])

In the case that the current curve is not degree-elevated, the result is only a least-squares approximation:

_images/curve_reduce_approx.png
>>> nodes = np.asfortranarray([
...     [0.0 , 2.5],
...     [1.25, 5.0],
...     [3.75, 7.5],
...     [5.0 , 2.5],
... ])
>>> curve = bezier.Curve(nodes, degree=3)
>>> reduced = curve.reduce_()
>>> reduced
<Curve (degree=2, dimension=2)>
>>> reduced.nodes
array([[-0.125, 2.125],
       [ 2.5  , 8.125],
       [ 5.125, 2.875]])
Returns:The degree-reduced curve.
Return type:Curve
root

Curve – The “root” curve that contains the current curve.

This indicates that the current curve is a section of the “root” curve. For example:

>>> _, right = curve.subdivide()
>>> right
<Curve (degree=2, dimension=2, start=0.5, end=1)>
>>> right.root is curve
True
>>> right.evaluate(0.0) == curve.evaluate(0.5)
array([[ True, True]], dtype=bool)
>>>
>>> mid_left, _ = right.subdivide()
>>> mid_left
<Curve (degree=2, dimension=2, start=0.5, end=0.75)>
>>> mid_left.root is curve
True
>>> mid_left.evaluate(1.0) == curve.evaluate(0.75)
array([[ True, True]], dtype=bool)
specialize(start, end)

Specialize the curve to a given sub-interval.

_images/curve_specialize.png
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve = bezier.Curve(nodes, degree=2)
>>> new_curve = curve.specialize(-0.25, 0.75)
>>> new_curve
<Curve (degree=2, dimension=2, start=-0.25, end=0.75)>
>>> new_curve.nodes
array([[-0.25 , -0.625],
       [ 0.25 ,  0.875],
       [ 0.75 ,  0.375]])

This is a generalized version of subdivide(), and can even match the output of that method:

>>> left, right = curve.subdivide()
>>> also_left = curve.specialize(0.0, 0.5)
>>> np.all(also_left.nodes == left.nodes)
True
>>> also_right = curve.specialize(0.5, 1.0)
>>> np.all(also_right.nodes == right.nodes)
True
Parameters:
  • start (float) – The start point of the interval we are specializing to.
  • end (float) – The end point of the interval we are specializing to.
Returns:

The newly-specialized curve.

Return type:

Curve

start

float – Start of sub-interval this curve represents.

This value is used to track the current curve in the re-parameterization / subdivision process. The curve is still defined on the unit interval, but this value illustrates how this curve relates to a “parent” curve. For example:

>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 2.0],
... ])
>>> curve = bezier.Curve(nodes, degree=1)
>>> curve
<Curve (degree=1, dimension=2)>
>>> left, right = curve.subdivide()
>>> left
<Curve (degree=1, dimension=2, start=0, end=0.5)>
>>> right
<Curve (degree=1, dimension=2, start=0.5, end=1)>
>>> _, mid_right = left.subdivide()
>>> mid_right
<Curve (degree=1, dimension=2, start=0.25, end=0.5)>
>>> mid_right.nodes
array([[ 0.25, 0.5 ],
       [ 0.5 , 1.  ]])
subdivide()

Split the curve \(B(s)\) into a left and right half.

Takes the interval \(\left[0, 1\right]\) and splits the curve into \(B_1 = B\left(\left[0, \frac{1}{2}\right]\right)\) and \(B_2 = B\left(\left[\frac{1}{2}, 1\right]\right)\). In order to do this, the curve is also re-parameterized, hence the resulting left and right halves have new nodes.

_images/curve_subdivide.png
>>> nodes = np.asfortranarray([
...     [0.0 , 0.0],
...     [1.25, 3.0],
...     [2.0 , 1.0],
... ])
>>> curve = bezier.Curve(nodes, degree=2)
>>> left, right = curve.subdivide()
>>> left
<Curve (degree=2, dimension=2, start=0, end=0.5)>
>>> left.nodes
array([[ 0.   , 0.   ],
       [ 0.625, 1.5  ],
       [ 1.125, 1.75 ]])
>>> right
<Curve (degree=2, dimension=2, start=0.5, end=1)>
>>> right.nodes
array([[ 1.125, 1.75 ],
       [ 1.625, 2.   ],
       [ 2.   , 1.   ]])
Returns:The left and right sub-curves.
Return type:Tuple [ Curve, Curve ]

bezier.curved_polygon module

Curved polygon and associated helpers.

A curved polygon (in \(\mathbf{R}^2\)) is defined by the collection of Bézier curves that determine the boundary.

class bezier.curved_polygon.CurvedPolygon(*edges, **kwargs)

Bases: object

Represents an object defined by its curved boundary.

The boundary is a piecewise defined collection of Bézier curves.

Note

The direction of the nodes in each Curve on the boundary is important. When verifying, we check that one curve begins where the last one ended.

_images/curved_polygon_constructor1.png
>>> import bezier
>>> import numpy as np
>>> nodes0 = np.asfortranarray([
...     [0.0,  0.0],
...     [1.0, -1.0],
...     [2.0,  0.0],
... ])
>>> edge0 = bezier.Curve(nodes0, degree=2)
>>> nodes1 = np.asfortranarray([
...     [2.0, 0.0],
...     [2.0, 1.0],
... ])
>>> edge1 = bezier.Curve(nodes1, degree=1)
>>> nodes2 = np.asfortranarray([
...     [2.0, 1.0],
...     [1.0, 2.0],
...     [0.0, 1.0],
... ])
>>> edge2 = bezier.Curve(nodes2, degree=2)
>>> nodes3 = np.asfortranarray([
...     [0.0, 1.0],
...     [0.0, 0.0],
... ])
>>> edge3 = bezier.Curve(nodes3, degree=1)
>>> curved_poly = bezier.CurvedPolygon(
...     edge0, edge1, edge2, edge3)
>>> curved_poly
<CurvedPolygon (num_sides=4)>

Though the endpoints of each pair of edges are verified to match, the curved polygon as a whole is not verified, so creating a curved polygon with self-intersections is possible:

_images/curved_polygon_constructor2.png
>>> nodes0 = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 0.0],
... ])
>>> edge0 = bezier.Curve(nodes0, degree=1)
>>> nodes1 = np.asfortranarray([
...     [1.0 , 0.0],
...     [1.25, 0.5],
...     [1.0 , 1.0],
... ])
>>> edge1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [1.0, 1.0],
...     [2.0, 1.0],
... ])
>>> edge2 = bezier.Curve(nodes2, degree=1)
>>> nodes3 = np.asfortranarray([
...     [2.0, 1.0 ],
...     [1.0, 0.75],
...     [0.0, 0.0 ],
... ])
>>> edge3 = bezier.Curve(nodes3, degree=2)
>>> curved_poly = bezier.CurvedPolygon(
...     edge0, edge1, edge2, edge3)
>>> curved_poly
<CurvedPolygon (num_sides=4)>
Parameters:
  • edges (Tuple [ Curve, ... ]) – The boundary edges of the curved polygon.
  • kwargs – The only keyword argument accepted is _verify, which is a bool indicating if the edges should be verified as having shared endpoints. It defaults to True.
num_sides

int – The number of sides in the current polygon.

plot(pts_per_edge, color=None, ax=None)

Plot the current curved polygon.

Parameters:
Returns:

The axis containing the plot. This may be a newly created axis.

Return type:

matplotlib.artist.Artist

bezier.surface module

Helper for Bézier Surfaces / Triangles.

class bezier.surface.Surface(nodes, degree, base_x=0.0, base_y=0.0, width=1.0, _copy=True)

Bases: bezier._base.Base

Represents a Bézier surface.

We define a Bézier triangle as a mapping from the unit simplex in 2D (i.e. the unit triangle) onto a surface in an arbitrary dimension. We use barycentric coordinates

\[\lambda_1 = 1 - s - t, \lambda_2 = s, \lambda_3 = t\]

for points in the unit triangle \(\left\{(s, t) \mid 0 \leq s, t, s + t \leq 1\right\}\):

_images/unit_triangle.png

As with curves, using these weights we get convex combinations of points \(v_{i, j, k}\) in some vector space:

\[B\left(\lambda_1, \lambda_2, \lambda_3\right) = \sum_{i + j + k = d} \binom{d}{i \, j \, k} \lambda_1^i \lambda_2^j \lambda_3^k \cdot v_{i, j, k}\]

Note

We assume the nodes are ordered from left-to-right and from bottom-to-top. So for example, the linear triangle:

(0,0,1)

(1,0,0)  (0,1,0)

is ordered as

\[\left[\begin{array}{c c c} v_{1,0,0} & v_{0,1,0} & v_{0,0,1} \end{array}\right]^T\]

the quadratic triangle:

(0,0,2)

(1,0,1)  (0,1,1)

(2,0,0)  (1,1,0)  (0,2,0)

is ordered as

\[\left[\begin{array}{c c c c c c} v_{2,0,0} & v_{1,1,0} & v_{0,2,0} & v_{1,0,1} & v_{0,1,1} & v_{0,0,2} \end{array}\right]^T\]

the cubic triangle:

(0,0,3)

(1,0,2)  (0,1,2)

(2,0,1)  (1,1,1)  (0,2,1)

(3,0,0)  (2,1,0)  (1,2,0)  (0,3,0)

is ordered as

\[\left[\begin{array}{c c c c c c c c c c} v_{3,0,0} & v_{2,1,0} & v_{1,2,0} & v_{0,3,0} & v_{2,0,1} & v_{1,1,1} & v_{0,2,1} & v_{1,0,2} & v_{0,1,2} & v_{0,0,3} \end{array}\right]^T\]

and so on.

The index formula

\[j + \frac{k}{2} \left(2 (i + j) + k + 3\right)\]

can be used to map a triple \((i, j, k)\) onto the corresponding linear index, but it is not particularly insightful or useful.
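As a quick sanity check (a sketch, not library code), the formula can be verified against the cubic ordering listed above; the product \(k \left(2 (i + j) + k + 3\right)\) is always even, so integer division is exact:

>>> def linear_index(i, j, k):
...     return j + k * (2 * (i + j) + k + 3) // 2
>>> cubic_order = [
...     (3, 0, 0), (2, 1, 0), (1, 2, 0), (0, 3, 0),
...     (2, 0, 1), (1, 1, 1), (0, 2, 1),
...     (1, 0, 2), (0, 1, 2),
...     (0, 0, 3),
... ]
>>> [linear_index(i, j, k) for (i, j, k) in cubic_order]
[0, 1, 2, 3, 4, 5, 6, 7, 8, 9]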

_images/surface_constructor.png
>>> import bezier
>>> import numpy as np
>>> nodes = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.5  , 0.0  ],
...     [1.0  , 0.25 ],
...     [0.125, 0.5  ],
...     [0.375, 0.375],
...     [0.25 , 1.0  ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> surface
<Surface (degree=2, dimension=2)>
Parameters:
  • nodes (numpy.ndarray) – The nodes in the surface. The rows represent each node while the columns are the dimension of the ambient space.
  • degree (int) – The degree of the surface. This is assumed to correctly correspond to the number of nodes. Use from_nodes() if the degree has not yet been computed.
  • base_x (Optional [ float ]) – The \(x\)-coordinate of the base vertex of the sub-triangle that this surface represents. See width() for more info.
  • base_y (Optional [ float ]) – The \(y\)-coordinate of the base vertex of the sub-triangle that this surface represents. See width() for more info.
  • width (Optional [ float ]) – The width of the sub-triangle that this surface represents. See width() for more info.
  • _copy (bool) – Flag indicating if the nodes should be copied before being stored. Defaults to True since callers may freely mutate nodes after passing in.
area

float – The area of the current surface.

Raises:NotImplementedError – If the area isn’t already cached.
base_x

float – The x-coordinate of the base vertex.

See width() for more detail.

base_y

float – The y-coordinate of the base vertex.

See width() for more detail.

degree

int – The degree of the current shape.

dimension

int – The dimension that the shape lives in.

For example, if the shape lives in \(\mathbf{R}^3\), then the dimension is 3.

edges

The edges of the surface.

>>> nodes = np.asfortranarray([
...     [0.0   ,  0.0   ],
...     [0.5   , -0.1875],
...     [1.0   ,  0.0   ],
...     [0.1875,  0.5   ],
...     [0.625 ,  0.625 ],
...     [0.0   ,  1.0   ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> edge1, _, _ = surface.edges
>>> edge1
<Curve (degree=2, dimension=2)>
>>> edge1.nodes
array([[ 0.  ,  0.    ],
       [ 0.5 , -0.1875],
       [ 1.  ,  0.    ]])
Returns:The edges of the surface.
Return type:Tuple [ Curve, Curve, Curve ]
elevate()

Return a degree-elevated version of the current surface.

Does this by converting the current nodes \(\left\{v_{i, j, k}\right\}_{i + j + k = d}\) to new nodes \(\left\{w_{i, j, k}\right\}_{i + j + k = d + 1}\) by re-writing

\[E\left(\lambda_1, \lambda_2, \lambda_3\right) = \left(\lambda_1 + \lambda_2 + \lambda_3\right) B\left(\lambda_1, \lambda_2, \lambda_3\right) = \sum_{i + j + k = d + 1} \binom{d + 1}{i \, j \, k} \lambda_1^i \lambda_2^j \lambda_3^k \cdot w_{i, j, k}\]

In this form, we must have

\[\begin{split}\begin{align*} \binom{d + 1}{i \, j \, k} \cdot w_{i, j, k} &= \binom{d}{i - 1 \, j \, k} \cdot v_{i - 1, j, k} + \binom{d}{i \, j - 1 \, k} \cdot v_{i, j - 1, k} + \binom{d}{i \, j \, k - 1} \cdot v_{i, j, k - 1} \\ \Longleftrightarrow (d + 1) \cdot w_{i, j, k} &= i \cdot v_{i - 1, j, k} + j \cdot v_{i, j - 1, k} + k \cdot v_{i, j, k - 1} \end{align*}\end{split}\]

where we define, for example, \(v_{i, j, k - 1} = 0\) if \(k = 0\).

_images/surface_elevate.png
>>> nodes = np.asfortranarray([
...     [0.0 , 0.0 ],
...     [1.5 , 0.0 ],
...     [3.0 , 0.0 ],
...     [0.75, 1.5 ],
...     [2.25, 2.25],
...     [0.0 , 3.0 ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> elevated = surface.elevate()
>>> elevated
<Surface (degree=3, dimension=2)>
>>> elevated.nodes
array([[ 0.  , 0.  ],
       [ 1.  , 0.  ],
       [ 2.  , 0.  ],
       [ 3.  , 0.  ],
       [ 0.5 , 1.  ],
       [ 1.5 , 1.25],
       [ 2.5 , 1.5 ],
       [ 0.5 , 2.  ],
       [ 1.5 , 2.5 ],
       [ 0.  , 3.  ]])
Returns:The degree-elevated surface.
Return type:Surface
evaluate_barycentric(lambda1, lambda2, lambda3, _verify=True)

Compute a point on the surface.

Evaluates \(B\left(\lambda_1, \lambda_2, \lambda_3\right)\).

_images/surface_evaluate_barycentric.png
>>> nodes = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.5  , 0.0  ],
...     [1.0  , 0.25 ],
...     [0.125, 0.5  ],
...     [0.375, 0.375],
...     [0.25 , 1.0  ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> point = surface.evaluate_barycentric(0.125, 0.125, 0.75)
>>> point
array([[ 0.265625 , 0.73046875]])

However, this can’t be used for points outside the reference triangle:

>>> surface.evaluate_barycentric(-0.25, 0.75, 0.5)
Traceback (most recent call last):
  ...
ValueError: ('Parameters must be positive', -0.25, 0.75, 0.5)

or for non-Barycentric coordinates:

>>> surface.evaluate_barycentric(0.25, 0.25, 0.25)
Traceback (most recent call last):
  ...
ValueError: ('Values do not sum to 1', 0.25, 0.25, 0.25)

However, these “invalid” inputs can be used if _verify is False.

>>> surface.evaluate_barycentric(-0.25, 0.75, 0.5, _verify=False)
array([[ 0.6875 , 0.546875]])
>>> surface.evaluate_barycentric(0.25, 0.25, 0.25, _verify=False)
array([[ 0.203125, 0.1875 ]])
Parameters:
  • lambda1 (float) – Parameter along the reference triangle.
  • lambda2 (float) – Parameter along the reference triangle.
  • lambda3 (float) – Parameter along the reference triangle.
  • _verify (Optional [ bool ]) – Indicates if the barycentric coordinates should be verified as summing to one and all non-negative (i.e. verified as barycentric). Can either be used to evaluate at points outside the domain, or to save time when the caller already knows the input is verified. Defaults to True.
Returns:

The point on the surface (as a two dimensional NumPy array with a single row).

Return type:

numpy.ndarray

Raises:
  • ValueError – If the weights are not valid barycentric coordinates, i.e. they don’t sum to 1. (Won’t raise if _verify=False.)
  • ValueError – If some weights are negative. (Won’t raise if _verify=False.)
evaluate_barycentric_multi(param_vals, _verify=True)

Compute multiple points on the surface.

Assumes param_vals has three columns of Barycentric coordinates. See evaluate_barycentric() for more details on how each row of parameter values is evaluated.

_images/surface_evaluate_barycentric_multi.png
>>> nodes = np.asfortranarray([
...     [ 0. , 0.  ],
...     [ 1. , 0.75],
...     [ 2. , 1.  ],
...     [-1.5, 1.  ],
...     [-0.5, 1.5 ],
...     [-3. , 2.  ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> surface
<Surface (degree=2, dimension=2)>
>>> param_vals = np.asfortranarray([
...     [0.   , 0.25, 0.75 ],
...     [1.   , 0.  , 0.   ],
...     [0.25 , 0.5 , 0.25 ],
...     [0.375, 0.25, 0.375],
... ])
>>> points = surface.evaluate_barycentric_multi(param_vals)
>>> points
array([[-1.75  , 1.75    ],
       [ 0.    , 0.      ],
       [ 0.25  , 1.0625  ],
       [-0.625 , 1.046875]])

Note

There is also a Fortran implementation of this function, which will be used if it can be built.

Parameters:
  • param_vals (numpy.ndarray) – Array of parameter values (as a Nx3 array).
  • _verify (Optional [ bool ]) – Indicates if the coordinates should be verified. See evaluate_barycentric(). Defaults to True. Will also double check that param_vals is the right shape.
Returns:

The points on the surface.

Return type:

numpy.ndarray

Raises:

ValueError – If param_vals is not a 2D array and _verify=True.

evaluate_cartesian(s, t, _verify=True)

Compute a point on the surface.

Evaluates \(B\left(1 - s - t, s, t\right)\) by calling evaluate_barycentric().

This method acts as a (partial) inverse to locate().

>>> nodes = np.asfortranarray([
...     [0.0 , 0.0  ],
...     [0.5 , 0.5  ],
...     [1.0 , 0.625],
...     [0.0 , 0.5  ],
...     [0.5 , 0.5  ],
...     [0.25, 1.0  ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> point = surface.evaluate_cartesian(0.125, 0.375)
>>> point
array([[ 0.16015625, 0.44726562]])
>>> surface.evaluate_barycentric(0.5, 0.125, 0.375)
array([[ 0.16015625, 0.44726562]])
Parameters:
  • s (float) – Parameter along the reference triangle.
  • t (float) – Parameter along the reference triangle.
  • _verify (Optional [ bool ]) – Indicates if the coordinates should be verified inside of the reference triangle. Defaults to True.
Returns:

The point on the surface (as a two dimensional NumPy array).

Return type:

numpy.ndarray

evaluate_cartesian_multi(param_vals, _verify=True)

Compute multiple points on the surface.

Assumes param_vals has two columns of Cartesian coordinates. See evaluate_cartesian() for more details on how each row of parameter values is evaluated.

_images/surface_evaluate_cartesian_multi.png
>>> nodes = np.asfortranarray([
...     [ 0.0, 0.0],
...     [ 2.0, 1.0],
...     [-3.0, 2.0],
... ])
>>> surface = bezier.Surface(nodes, degree=1)
>>> surface
<Surface (degree=1, dimension=2)>
>>> param_vals = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.125, 0.625],
...     [0.5  , 0.5  ],
... ])
>>> points = surface.evaluate_cartesian_multi(param_vals)
>>> points
array([[ 0.   , 0.   ],
       [-1.625, 1.375],
       [-0.5  , 1.5  ]])

Note

There is also a Fortran implementation of this function, which will be used if it can be built.

Parameters:
  • param_vals (numpy.ndarray) – Array of parameter values (as a Nx2 array).
  • _verify (Optional [ bool ]) – Indicates if the coordinates should be verified. See evaluate_cartesian(). Defaults to True. Will also double check that param_vals is the right shape.
Returns:

The points on the surface.

Return type:

numpy.ndarray

Raises:

ValueError – If param_vals is not a 2D array and _verify=True.

classmethod from_nodes(nodes, base_x=0.0, base_y=0.0, width=1.0, _copy=True)

Create a Surface from nodes.

Computes the degree based on the shape of nodes.

Parameters:
  • nodes (numpy.ndarray) – The nodes in the surface. The rows represent each node while the columns are the dimension of the ambient space.
  • base_x (Optional [ float ]) – The \(x\)-coordinate of the base vertex of the sub-triangle that this surface represents.
  • base_y (Optional [ float ]) – The \(y\)-coordinate of the base vertex of the sub-triangle that this surface represents.
  • width (Optional [ float ]) – The width of the sub-triangle that this surface represents.
  • _copy (bool) – Flag indicating if the nodes should be copied before being stored. Defaults to True since callers may freely mutate nodes after passing in.
Returns:

The constructed surface.

Return type:

Surface
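A short usage sketch (not from the original docstring): six rows of nodes imply degree 2, since a degree-\(d\) surface has \(\binom{d + 2}{2}\) nodes.

>>> nodes = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.5  , 0.0  ],
...     [1.0  , 0.25 ],
...     [0.125, 0.5  ],
...     [0.375, 0.375],
...     [0.25 , 1.0  ],
... ])
>>> surface = bezier.Surface.from_nodes(nodes)
>>> surface
<Surface (degree=2, dimension=2)>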

intersect(other, strategy=<IntersectionStrategy.geometric: 'geometric'>, _verify=True)

Find the common intersection with another surface.

Parameters:
  • other (Surface) – Other surface to intersect with.
  • strategy (Optional [ IntersectionStrategy ]) – The intersection algorithm to use. Defaults to geometric.
  • _verify (Optional [ bool ]) – Indicates if extra caution should be used to verify assumptions about the algorithm as it proceeds. Can be disabled to speed up execution time. Defaults to True.
Returns:

List of intersections (possibly empty).

Return type:

List [ CurvedPolygon ]

Raises:
is_valid

bool – Flag indicating if the surface is “valid”.

Here, “valid” means there are no self-intersections or singularities.

This checks if the Jacobian of the map from the reference triangle is nonzero. For example, a linear “surface” with collinear points is invalid:

_images/surface_is_valid1.png
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 1.0],
...     [2.0, 2.0],
... ])
>>> surface = bezier.Surface(nodes, degree=1)
>>> surface.is_valid
False

while a quadratic surface with one straight side is valid:

_images/surface_is_valid2.png
>>> nodes = np.asfortranarray([
...     [ 0.0  , 0.0  ],
...     [ 0.5  , 0.125],
...     [ 1.0  , 0.0  ],
...     [-0.125, 0.5  ],
...     [ 0.5  , 0.5  ],
...     [ 0.0  , 1.0  ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> surface.is_valid
True

though not all higher degree surfaces are valid:

_images/surface_is_valid3.png
>>> nodes = np.asfortranarray([
...     [1.0, 0.0],
...     [0.0, 0.0],
...     [1.0, 1.0],
...     [0.0, 0.0],
...     [0.0, 0.0],
...     [0.0, 1.0],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> surface.is_valid
False
locate(point, _verify=True)

Find a point on the current surface.

Solves for \(s\) and \(t\) in \(B(s, t) = p\).

This method acts as a (partial) inverse to evaluate_cartesian().

Note

A unique solution is only guaranteed if the current surface is valid. This code assumes a valid surface, but doesn’t check.

_images/surface_locate.png
>>> nodes = np.asfortranarray([
...     [0.0 ,  0.0 ],
...     [0.5 , -0.25],
...     [1.0 ,  0.0 ],
...     [0.25,  0.5 ],
...     [0.75,  0.75],
...     [0.0 ,  1.0 ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> point = np.asfortranarray([[0.59375, 0.25]])
>>> s, t = surface.locate(point)
>>> s
0.5
>>> t
0.25
Parameters:
  • point (numpy.ndarray) – A (1xD) point on the surface, where \(D\) is the dimension of the surface.
  • _verify (Optional [ bool ]) – Indicates if extra caution should be used to verify assumptions about the inputs. Can be disabled to speed up execution time. Defaults to True.
Returns:

The \(s\) and \(t\) values corresponding to point or None if the point is not on the surface.

Return type:

Optional [ Tuple [ float, float ] ]

Raises:
  • NotImplementedError – If the surface isn’t in \(\mathbf{R}^2\).
  • ValueError – If the dimension of the point doesn’t match the dimension of the current surface.
nodes

numpy.ndarray – The nodes that define the current shape.

plot(pts_per_edge, color=None, ax=None, with_nodes=False)

Plot the current surface.

Parameters:
Returns:

The axis containing the plot. This may be a newly created axis.

Return type:

matplotlib.artist.Artist

Raises:

NotImplementedError – If the surface’s dimension is not 2.

subdivide()

Split the surface into four sub-surfaces.

Does so by taking the unit triangle (i.e. the domain of the surface) and splitting it into four sub-triangles

_images/surface_subdivide1.png

Then the surface is re-parameterized via the map to / from the given sub-triangles and the unit triangle.

For example, when a degree two surface is subdivided:

_images/surface_subdivide2.png
>>> nodes = np.asfortranarray([
...     [-1.0 , 0.0 ],
...     [ 0.5 , 0.5 ],
...     [ 2.0 , 0.0 ],
...     [ 0.25, 1.75],
...     [ 2.0 , 3.0 ],
...     [ 0.0 , 4.0 ],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> _, sub_surface_b, _, _ = surface.subdivide()
>>> sub_surface_b
<Surface (degree=2, dimension=2, base=(0.5, 0.5), width=-0.5)>
>>> sub_surface_b.nodes
array([[ 1.5   , 2.5   ],
       [ 0.6875, 2.3125],
       [-0.125 , 1.875 ],
       [ 1.1875, 1.3125],
       [ 0.4375, 1.3125],
       [ 0.5   , 0.25  ]])
Returns:The lower left, central, lower right and upper left sub-surfaces (in that order).
Return type:Tuple [ Surface, Surface, Surface, Surface ]
width

float – The “width” of the parameterized triangle.

When re-parameterizing (e.g. via subdivide()) we specialize the surface from the unit triangle to some sub-triangle. After doing this, we re-parameterize so that that sub-triangle is treated like the unit triangle.

To track which sub-triangle we are in during the subdivision process, we use the coordinates of the base vertex as well as the “width” of each leg.

_images/surface_width1.png
>>> surface.base_x, surface.base_y
(0.0, 0.0)
>>> surface.width
1.0

Upon subdivision, the width halves (and potentially changes sign) and the vertex moves to one of four points:

_images/surface_width2.png
>>> _, sub_surface_b, sub_surface_c, _ = surface.subdivide()
>>> sub_surface_b.base_x, sub_surface_b.base_y
(0.5, 0.5)
>>> sub_surface_b.width
-0.5
>>> sub_surface_c.base_x, sub_surface_c.base_y
(0.5, 0.0)
>>> sub_surface_c.width
0.5

Curve-Curve Intersection

The problem of intersecting two curves is a difficult one in computational geometry. The Curve.intersect() method uses a combination of curve subdivision, bounding box intersection, and curve approximation (by lines) to find intersections.

Curve-Line Intersection

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0, 0.375],
...     [1.0, 0.375],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> curve1.intersect(curve2)
array([[ 0.25 , 0.375],
       [ 0.75 , 0.375]])
_images/curves1_and_8.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.5, 0.0 ],
...     [0.5, 0.75],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> curve1.intersect(curve2)
array([[ 0.5, 0.5]])
_images/curves1_and_9.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [4.5, 9.0],
...     [9.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0, 8.0],
...     [6.0, 0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> curve1.intersect(curve2)
array([[ 3., 4.]])
_images/curves10_and_11.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.375],
...     [1.0, 0.375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=1)
>>> nodes2 = np.asfortranarray([
...     [0.5, 0.0 ],
...     [0.5, 0.75],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> curve1.intersect(curve2)
array([[ 0.5 , 0.375]])
_images/curves8_and_9.png
>>> nodes1 = np.asfortranarray([
...     [-1.0, 1.0],
...     [ 0.5, 0.5],
...     [ 0.0, 2.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [ 0.5 , 0.5 ],
...     [-0.25, 1.25],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> curve1.intersect(curve2)
array([[ 0., 1.]])
_images/curves29_and_30.png

Curved Intersections

For curves which intersect at exact floating point numbers, we can typically compute the intersection with zero error:

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0,  0.75],
...     [0.5, -0.25],
...     [1.0,  0.75],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 0.25 , 0.375],
       [ 0.75 , 0.375]])
_images/curves1_and_5.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [1.5, 3.0],
...     [3.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [ 3.0  ,  1.5    ],
...     [ 2.625, -0.90625],
...     [-0.75 ,  2.4375 ],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 0.75  , 1.125  ],
       [ 2.625 , 0.65625]])
_images/curves3_and_4.png
>>> nodes1 = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.375, 0.75 ],
...     [0.75 , 0.375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.25 , 0.5625],
...     [0.625, 0.1875],
...     [1.0  , 0.9375],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 0.375 , 0.46875],
       [ 0.625 , 0.46875]])
_images/curves14_and_16.png
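The doctests below report errors via a binary_exponent helper that is not part of the public interface. A plausible sketch of such a helper (an assumption about its behavior, not the documented definition) returns the base-2 exponent of the error, so that a value of -54 means the error lies in \(\left[2^{-54}, 2^{-53}\right)\):

>>> import math
>>> def binary_exponent(value):
...     _, exponent = math.frexp(value)
...     return exponent - 1
>>> binary_exponent(0.5)
-1
>>> binary_exponent(2.0**(-54))
-54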

Even for curves which don’t intersect at exact floating point numbers, we can compute the intersection to machine precision:

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [1.125,  0.5],
...     [0.625, -0.5],
...     [0.125,  0.5],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> intersections = curve1.intersect(curve2)
>>> sq31 = np.sqrt(31.0)
>>> expected = np.asfortranarray([
...     [36 - 4 * sq31, 16 + sq31],
...     [36 + 4 * sq31, 16 - sq31],
... ]) / 64.0
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-54
_images/curves1_and_2.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0, 0.265625],
...     [0.5, 0.234375],
...     [1.0, 0.265625],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> intersections = curve1.intersect(curve2)
>>> sq33 = np.sqrt(33.0)
>>> expected = np.asfortranarray([
...     [33 - 4 * sq33, 17],
...     [33 + 4 * sq33, 17],
... ]) / 66.0
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-54
_images/curves1_and_7.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0 ,  0.0],
...     [0.25,  2.0],
...     [0.5 , -2.0],
...     [0.75,  2.0],
...     [1.0 ,  0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=4)
>>> intersections = curve1.intersect(curve2)
>>> sq7 = np.sqrt(7.0)
>>> expected = np.asfortranarray([
...     [7 - sq7, 6],
...     [7 + sq7, 6],
...     [      0, 0],
...     [     14, 0],
... ]) / 14.0
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-54
_images/curves1_and_13.png
>>> nodes1 = np.asfortranarray([
...     [-0.125, -0.28125],
...     [ 0.5  ,  1.28125],
...     [ 1.125, -0.28125],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [ 1.5625, -0.0625],
...     [-1.5625,  0.25  ],
...     [ 1.5625,  0.5625],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> intersections = curve1.intersect(curve2)
>>> sq5 = np.sqrt(5.0)
>>> expected = np.asfortranarray([
...     [6 - 2 * sq5, 5 - sq5],
...     [          4, 6      ],
...     [         16, 0      ],
...     [6 + 2 * sq5, 5 + sq5],
... ]) / 16.0
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-51
_images/curves21_and_22.png

For higher degree intersections, the error starts to get a little larger.

>>> nodes1 = np.asfortranarray([
...     [0.25 , 0.625],
...     [0.625, 0.25 ],
...     [1.0  , 1.0  ],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0 , 0.5],
...     [0.25, 1.0],
...     [0.75, 1.5],
...     [1.0 , 0.5],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=3)
>>> intersections = curve1.intersect(curve2)
>>> s_vals = np.roots([486, -3726, 13905, -18405, 6213, 1231])
>>> _, s_val, _ = np.sort(s_vals[s_vals.imag == 0].real)
>>> x_val = (3 * s_val + 1) / 4
>>> y_val = (9 * s_val * s_val - 6 * s_val + 5) / 8
>>> expected = np.asfortranarray([
...     [x_val, y_val],
... ])
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-50
_images/curves15_and_25.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 8.0],
...     [6.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=1)
>>> nodes2 = np.asfortranarray([
...     [0.375, 7.0],
...     [2.125, 8.0],
...     [3.875, 0.0],
...     [5.625, 1.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=3)
>>> intersections = curve1.intersect(curve2)
>>> sq7 = np.sqrt(7.0)
>>> expected = np.asfortranarray([
...     [           72, 96           ],
...     [72 - 21 * sq7, 96 + 28 * sq7],
...     [72 + 21 * sq7, 96 - 28 * sq7],
... ]) / 24.0
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-50
_images/curves11_and_26.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.375],
...     [1.0, 0.375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=1)
>>> nodes2 = np.asfortranarray([
...     [0.125, 0.25  ],
...     [0.375, 0.75  ],
...     [0.625, 0.0   ],
...     [0.875, 0.1875],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=3)
>>> intersections = curve1.intersect(curve2)
>>> s_val1, s_val2, _ = np.sort(np.roots(
...     [17920, -29760, 13512, -1691]))
>>> expected = np.asfortranarray([
...     [s_val2, 0.375],
...     [s_val1, 0.375],
... ])
>>> max_err = np.max(np.abs(intersections - expected))
>>> binary_exponent(max_err)
-51
_images/curves8_and_27.png

Intersections at Endpoints

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [1.0,  0.0],
...     [1.5, -1.0],
...     [2.0,  0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 1., 0.]])
_images/curves1_and_18.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [2.0, 0.0],
...     [1.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 1., 0.]])
_images/curves1_and_19.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [4.5, 9.0],
...     [9.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [11.0,  8.0],
...     [ 7.0, 10.0],
...     [ 3.0,  4.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 3., 4.]])
_images/curves10_and_17.png

Detecting Self-Intersections

>>> nodes1 = np.asfortranarray([
...     [ 0.0 , 2.0  ],
...     [-1.0 , 0.0  ],
...     [ 1.0 , 1.0  ],
...     [-0.75, 1.625],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=3)
>>> left, right = curve1.subdivide()
>>> left.intersect(right)
array([[-0.09375 , 0.828125],
       [-0.25    , 1.375   ]])
_images/curves42_and_43.png

Limitations

Intersections that occur at points of tangency are in general problematic. For example, consider

\[\begin{split}B_1(s) = \left[ \begin{array}{c} s \\ 2s(1 - s)\end{array}\right], \quad B_2(t) = \left[ \begin{array}{c} t \\ t^2 + (1 - t)^2 \end{array}\right]\end{split}\]

The first curve is the zero set of \(y - 2x(1 - x)\), so plugging in the second curve gives

\[0 = t^2 + (1 - t)^2 - 2t(1 - t) = (2t - 1)^2.\]

This shows that a point of tangency is equivalent to a repeated root of a polynomial. For this example, the intersection process successfully terminates

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0, 1.0],
...     [0.5, 0.0],
...     [1.0, 1.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
array([[ 0.5, 0.5]])
_images/curves1_and_6.png

However, this library mostly avoids (for now) computing tangent intersections. For example, the curves

_images/curves14_and_15.png

have a tangent intersection that this library fails to compute:

>>> nodes1 = np.asfortranarray([
...     [0.0  , 0.0  ],
...     [0.375, 0.75 ],
...     [0.75 , 0.375],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.25 , 0.625],
...     [0.625, 0.25 ],
...     [1.0  , 1.0  ],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
Traceback (most recent call last):
  ...
NotImplementedError: Line segments parallel.

This failure comes from the fact that the linear approximations of the curves near the point of intersection are parallel.

As above, we can find some cases where tangent intersections are resolved:

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [4.5, 9.0],
...     [9.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [3.0, 4.5],
...     [8.0, 4.5],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=1)
>>> curve1.intersect(curve2)
array([[ 4.5, 4.5]])
_images/curves10_and_23.png

but even by rotating an intersection (from above) that we know works

_images/curves28_and_29.png

we still see a failure

>>> nodes1 = np.asfortranarray([
...     [ 0.0, 0.0],
...     [-0.5, 1.5],
...     [ 1.0, 1.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [-1.0, 1.0],
...     [ 0.5, 0.5],
...     [ 0.0, 2.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
Traceback (most recent call last):
  ...
NotImplementedError: The number of candidate intersections is too high.

In addition to points of tangency, coincident curve segments are (for now) not supported. For the curves

_images/curves1_and_24.png

the library fails as well

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.25,  0.375],
...     [0.75,  0.875],
...     [1.25, -0.625],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> curve1.intersect(curve2)
Traceback (most recent call last):
  ...
NotImplementedError: The number of candidate intersections is too high.

Algorithm Helpers

In an attempt to thoroughly vet each algorithm used in this library, each computation is split into small units that can be tested independently.

Though many of these computational units aren’t provided as part of the public interface of this library, they are still interesting. (Possibly) more importantly, it’s useful to see these algorithms at work.

In this document, these helper functions and objects are documented. This is to help with the exposition of the computation and does not imply that these are part of the stable public interface.

bezier._intersection_helpers._linearization_error(nodes)

Compute the maximum error of a linear approximation.

Helper for Linearization, which is used during the curve-curve intersection process.

We use the line

\[L(s) = v_0 (1 - s) + v_n s\]

and compute a bound on the maximum error

\[\max_{s \in \left[0, 1\right]} \|B(s) - L(s)\|_2.\]

Rather than computing the actual maximum (a tight bound), we use an upper bound via the remainder from Lagrange interpolation in each component. This leaves us with \(\frac{s(s - 1)}{2!}\) times the second derivative in each component.

The second derivative curve is degree \(d = n - 2\) and is given by

\[B''(s) = n(n - 1) \sum_{j = 0}^{d} \binom{d}{j} s^j (1 - s)^{d - j} \cdot \Delta^2 v_j\]

Due to this form (and the convex combination property of Bézier Curves) we know each component of the second derivative will be bounded by the maximum of that component among the \(\Delta^2 v_j\).

For example, the curve

\[\begin{split}B(s) = \left[\begin{array}{c} 0 \\ 0 \end{array}\right] (1 - s)^2 + \left[\begin{array}{c} 3 \\ 1 \end{array}\right] 2s(1 - s) + \left[\begin{array}{c} 9 \\ -2 \end{array}\right] s^2\end{split}\]

has \(B''(s) \equiv \left[\begin{array}{c} 6 \\ -8 \end{array}\right]\) which has norm \(10\) everywhere, hence the maximum error is

\[\left.\frac{s(1 - s)}{2!} \cdot 10\right|_{s = \frac{1}{2}} = \frac{5}{4}.\]
_images/linearization_error.png
>>> nodes = np.asfortranarray([
...     [0.0,  0.0],
...     [3.0,  1.0],
...     [9.0, -2.0],
... ])
>>> linearization_error(nodes)
1.25

As a non-example, consider a “pathological” set of control points:

\[\begin{split}B(s) = \left[\begin{array}{c} 0 \\ 0 \end{array}\right] (1 - s)^3 + \left[\begin{array}{c} 5 \\ 12 \end{array}\right] 3s(1 - s)^2 + \left[\begin{array}{c} 10 \\ 24 \end{array}\right] 3s^2(1 - s) + \left[\begin{array}{c} 30 \\ 72 \end{array}\right] s^3\end{split}\]

By construction, this lies on the line \(y = \frac{12x}{5}\), but the parametrization is cubic: \(12 \cdot x(s) = 5 \cdot y(s) = 180s(s^2 + 1)\). Hence, the fact that the curve is a line is not accounted for and we take the worst case among the nodes in:

\[\begin{split}B''(s) = 3 \cdot 2 \cdot \left( \left[\begin{array}{c} 0 \\ 0 \end{array}\right] (1 - s) + \left[\begin{array}{c} 15 \\ 36 \end{array}\right] s\right)\end{split}\]

which gives a nonzero maximum error:

>>> nodes = np.asfortranarray([
...     [ 0.0,  0.0],
...     [ 5.0, 12.0],
...     [10.0, 24.0],
...     [30.0, 72.0],
... ])
>>> linearization_error(nodes)
29.25

Though it may seem that 0 is a more appropriate answer, consider the goal of this function. We seek to linearize curves and then intersect the linear approximations. Then the \(s\)-values from the line-line intersection are lifted back to the curves. Thus the error \(\|B(s) - L(s)\|_2\) is more relevant than the underlying algebraic curve containing \(B(s)\).

Note

It may be more appropriate to use a relative linearization error rather than the absolute error provided here. It’s unclear if the domain \(\left[0, 1\right]\) means the error is already adequately scaled or if the error should be scaled by the arc length of the curve or the (easier-to-compute) length of the line.

Note

There is also a Fortran implementation of this function, which will be used if it can be built.

Parameters:nodes (numpy.ndarray) – Nodes of a curve.
Returns:The maximum error between the curve and the linear approximation.
Return type:float
bezier._intersection_helpers._newton_refine(s, nodes1, t, nodes2)

Apply one step of 2D Newton’s method.

We want to use Newton’s method on the function

\[F(s, t) = B_1(s) - B_2(t)\]

to refine \(\left(s_{\ast}, t_{\ast}\right)\). Using this, and the Jacobian \(DF\), we “solve”

\[\begin{split}\left[\begin{array}{c} 0 \\ 0 \end{array}\right] \approx F\left(s_{\ast} + \Delta s, t_{\ast} + \Delta t\right) \approx F\left(s_{\ast}, t_{\ast}\right) + \left[\begin{array}{c c} B_1'\left(s_{\ast}\right) & - B_2'\left(t_{\ast}\right) \end{array}\right] \left[\begin{array}{c} \Delta s \\ \Delta t \end{array}\right]\end{split}\]

and refine with the component updates \(\Delta s\) and \(\Delta t\).

Note

This implementation assumes the curves live in \(\mathbf{R}^2\).

For example, the curves

\[\begin{split}\begin{align*} B_1(s) &= \left[\begin{array}{c} 0 \\ 0 \end{array}\right] (1 - s)^2 + \left[\begin{array}{c} 2 \\ 4 \end{array}\right] 2s(1 - s) + \left[\begin{array}{c} 4 \\ 0 \end{array}\right] s^2 \\ B_2(t) &= \left[\begin{array}{c} 2 \\ 0 \end{array}\right] (1 - t) + \left[\begin{array}{c} 0 \\ 3 \end{array}\right] t \end{align*}\end{split}\]

intersect at the point \(B_1\left(\frac{1}{4}\right) = B_2\left(\frac{1}{2}\right) = \frac{1}{2} \left[\begin{array}{c} 2 \\ 3 \end{array}\right]\).

However, starting from the wrong point we have

\[\begin{split}\begin{align*} F\left(\frac{3}{8}, \frac{1}{4}\right) &= \frac{1}{8} \left[\begin{array}{c} 0 \\ 9 \end{array}\right] \\ DF\left(\frac{3}{8}, \frac{1}{4}\right) &= \left[\begin{array}{c c} 4 & 2 \\ 2 & -3 \end{array}\right] \\ \Longrightarrow \left[\begin{array}{c} \Delta s \\ \Delta t \end{array}\right] &= \frac{9}{64} \left[\begin{array}{c} -1 \\ 2 \end{array}\right]. \end{align*}\end{split}\]
_images/newton_refine1.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [2.0, 4.0],
...     [4.0, 0.0],
... ])
>>> nodes2 = np.asfortranarray([
...     [2.0, 0.0],
...     [0.0, 3.0],
... ])
>>> s, t = 0.375, 0.25
>>> new_s, new_t = newton_refine(s, nodes1, t, nodes2)
>>> 64.0 * (new_s - s)
-9.0
>>> 64.0 * (new_t - t)
18.0

For “typical” curves, we converge to a solution quadratically. This means that the number of correct digits doubles every iteration (until machine precision is reached).

_images/newton_refine2.png
>>> nodes1 = np.asfortranarray([
...     [0.0 ,  0.0],
...     [0.25,  2.0],
...     [0.5 , -2.0],
...     [0.75,  2.0],
...     [1.0 ,  0.0],
... ])
>>> nodes2 = np.asfortranarray([
...     [0.0 , 1.0],
...     [0.25, 0.5],
...     [0.5 , 0.5],
...     [0.75, 0.5],
...     [1.0 , 0.0],
... ])
>>> # The expected intersection is the only real root of
>>> # 28 s^3 - 30 s^2 + 9 s - 1.
>>> omega = (28.0 * np.sqrt(17.0) + 132.0)**(1.0 / 3.0) / 28.0
>>> expected = 5.0 / 14.0 + omega + 1 / (49.0 * omega)
>>> s_vals = [0.625, None, None, None, None]
>>> t = 0.625
>>> np.log2(abs(expected - s_vals[0]))
-4.399...
>>> s_vals[1], t = newton_refine(s_vals[0], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[1]))
-7.901...
>>> s_vals[2], t = newton_refine(s_vals[1], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[2]))
-16.010...
>>> s_vals[3], t = newton_refine(s_vals[2], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[3]))
-32.110...
>>> s_vals[4], t = newton_refine(s_vals[3], nodes1, t, nodes2)
>>> s_vals[4] == expected
True

However, when the intersection occurs at a point of tangency, the convergence becomes linear. This means that the number of correct digits added each iteration is roughly constant.

_images/newton_refine3.png
>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> nodes2 = np.asfortranarray([
...     [0.0, 0.5],
...     [1.0, 0.5],
... ])
>>> expected = 0.5
>>> s_vals = [0.375, None, None, None, None, None]
>>> t = 0.375
>>> np.log2(abs(expected - s_vals[0]))
-3.0
>>> s_vals[1], t = newton_refine(s_vals[0], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[1]))
-4.0
>>> s_vals[2], t = newton_refine(s_vals[1], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[2]))
-5.0
>>> s_vals[3], t = newton_refine(s_vals[2], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[3]))
-6.0
>>> s_vals[4], t = newton_refine(s_vals[3], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[4]))
-7.0
>>> s_vals[5], t = newton_refine(s_vals[4], nodes1, t, nodes2)
>>> np.log2(abs(expected - s_vals[5]))
-8.0

Unfortunately, the process terminates with an error that is not close to machine precision \(\varepsilon\) when \(\Delta s = \Delta t = 0\).

>>> s1 = t1 = 0.5 - 0.5**27
>>> np.log2(0.5 - s1)
-27.0
>>> s2, t2 = newton_refine(s1, nodes1, t1, nodes2)
>>> s2 == t2
True
>>> np.log2(0.5 - s2)
-28.0
>>> s3, t3 = newton_refine(s2, nodes1, t2, nodes2)
>>> s3 == t3 == s2
True

Due to round-off near the point of tangency, the final error resembles \(\sqrt{\varepsilon}\) rather than machine precision as expected.

Note

The following is not implemented in this function. It’s just an exploration on how the shortcomings might be addressed.

However, this can be overcome. At the point of tangency, we want \(B_1'(s) \parallel B_2'(t)\). This can be checked numerically via

\[B_1'(s) \times B_2'(t) = 0.\]

For the last example (the one that converges linearly), this is

\[\begin{split}0 = \left[\begin{array}{c} 1 \\ 2 - 4s \end{array}\right] \times \left[\begin{array}{c} 1 \\ 0 \end{array}\right] = 4 s - 2.\end{split}\]

With this, we can modify Newton’s method to find a zero of the over-determined system

\[\begin{split}G(s, t) = \left[\begin{array}{c} B_1(s) - B_2(t) \\ B_1'(s) \times B_2'(t) \end{array}\right] = \left[\begin{array}{c} s - t \\ 2 s (1 - s) - \frac{1}{2} \\ 4 s - 2\end{array}\right].\end{split}\]

Since \(DG\) is \(3 \times 2\), we can’t invert it. However, we can find a least-squares solution:

\[\begin{split}\left(DG^T DG\right) \left[\begin{array}{c} \Delta s \\ \Delta t \end{array}\right] = -DG^T G.\end{split}\]

This only works if \(DG\) has full rank. In this case, it does since the submatrix containing the first and last rows has rank two:

\[\begin{split}DG = \left[\begin{array}{c c} 1 & -1 \\ 2 - 4 s & 0 \\ 4 & 0 \end{array}\right].\end{split}\]

Though this avoids a singular system, the normal equations have a condition number that is the square of the condition number of \(DG\).
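
The doctests below use a modified_update helper whose implementation is not shown here (and, as noted, is not part of this function). A minimal sketch of such a helper, specialized to these two example curves and solving the normal equations above, might look like the following; the last few bits of the iterates can depend on exactly how the linear system is solved:

>>> import numpy as np
>>> def modified_update(s, t):
...     # Gauss-Newton step for the over-determined system G(s, t),
...     # specialized to the two curves in this example.
...     func_g = np.asfortranarray([
...         s - t,
...         2.0 * s * (1.0 - s) - 0.5,
...         4.0 * s - 2.0,
...     ])
...     jac_g = np.asfortranarray([
...         [1.0, -1.0],
...         [2.0 - 4.0 * s, 0.0],
...         [4.0, 0.0],
...     ])
...     # Normal equations: (DG^T DG) [ds, dt]^T = -DG^T G.
...     delta_s, delta_t = np.linalg.solve(
...         jac_g.T.dot(jac_g), -jac_g.T.dot(func_g))
...     return s + delta_s, t + delta_t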

Starting from \(s = t = \frac{3}{8}\) as above:

>>> s0, t0 = 0.375, 0.375
>>> np.log2(0.5 - s0)
-3.0
>>> s1, t1 = modified_update(s0, t0)
>>> s1 == t1
True
>>> 1040.0 * s1
519.0
>>> np.log2(0.5 - s1)
-10.022...
>>> s2, t2 = modified_update(s1, t1)
>>> s2 == t2
True
>>> np.log2(0.5 - s2)
-31.067...
>>> s3, t3 = modified_update(s2, t2)
>>> s3 == t3 == 0.5
True
Parameters:
  • s (float) – Parameter of a near-intersection along the first curve.
  • nodes1 (numpy.ndarray) – Nodes of first curve forming intersection.
  • t (float) – Parameter of a near-intersection along the second curve.
  • nodes2 (numpy.ndarray) – Nodes of second curve forming intersection.
Returns:

The refined parameters from a single Newton step.

Return type:

Tuple [ float, float ]

bezier._intersection_helpers._segment_intersection(start0, end0, start1, end1)

Determine the intersection of two line segments.

Assumes each line is parametric

\[\begin{split}\begin{alignat*}{2} L_0(s) &= S_0 (1 - s) + E_0 s &&= S_0 + s \Delta_0 \\ L_1(t) &= S_1 (1 - t) + E_1 t &&= S_1 + t \Delta_1. \end{alignat*}\end{split}\]

To solve \(S_0 + s \Delta_0 = S_1 + t \Delta_1\), we use the cross product:

\[\left(S_0 + s \Delta_0\right) \times \Delta_1 = \left(S_1 + t \Delta_1\right) \times \Delta_1 \Longrightarrow s \left(\Delta_0 \times \Delta_1\right) = \left(S_1 - S_0\right) \times \Delta_1.\]

Similarly

\[\Delta_0 \times \left(S_0 + s \Delta_0\right) = \Delta_0 \times \left(S_1 + t \Delta_1\right) \Longrightarrow \left(S_1 - S_0\right) \times \Delta_0 = \Delta_0 \times \left(S_0 - S_1\right) = t \left(\Delta_0 \times \Delta_1\right).\]

Note

Since our points are in \(\mathbf{R}^2\), the “traditional” cross product in \(\mathbf{R}^3\) will always point in the \(z\) direction, so in the above we mean the \(z\) component of the cross product, rather than the entire vector.

For example, the diagonal lines

\[\begin{split}\begin{align*} L_0(s) &= \left[\begin{array}{c} 0 \\ 0 \end{array}\right] (1 - s) + \left[\begin{array}{c} 2 \\ 2 \end{array}\right] s \\ L_1(t) &= \left[\begin{array}{c} -1 \\ 2 \end{array}\right] (1 - t) + \left[\begin{array}{c} 1 \\ 0 \end{array}\right] t \end{align*}\end{split}\]

intersect at \(L_0\left(\frac{1}{4}\right) = L_1\left(\frac{3}{4}\right) = \frac{1}{2} \left[\begin{array}{c} 1 \\ 1 \end{array}\right]\).

_images/segment_intersection1.png
>>> start0 = np.asfortranarray([[0.0, 0.0]])
>>> end0 = np.asfortranarray([[2.0, 2.0]])
>>> start1 = np.asfortranarray([[-1.0, 2.0]])
>>> end1 = np.asfortranarray([[1.0, 0.0]])
>>> s, t, _ = segment_intersection(start0, end0, start1, end1)
>>> s
0.25
>>> t
0.75
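
These values follow directly from the two cross-product formulas above. A quick sketch with the same segments (using flat arrays for brevity, not the library implementation):

>>> import numpy as np
>>> def cross2d(vec0, vec1):
...     # z-component of the planar cross product.
...     return vec0[0] * vec1[1] - vec0[1] * vec1[0]
...
>>> start0 = np.asfortranarray([0.0, 0.0])
>>> end0 = np.asfortranarray([2.0, 2.0])
>>> start1 = np.asfortranarray([-1.0, 2.0])
>>> end1 = np.asfortranarray([1.0, 0.0])
>>> delta0 = end0 - start0
>>> delta1 = end1 - start1
>>> denominator = cross2d(delta0, delta1)  # Delta_0 x Delta_1
>>> float(cross2d(start1 - start0, delta1) / denominator)
0.25
>>> float(cross2d(start1 - start0, delta0) / denominator)
0.75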

Taking the parallel (but different) lines

\[\begin{split}\begin{align*} L_0(s) &= \left[\begin{array}{c} 1 \\ 0 \end{array}\right] (1 - s) + \left[\begin{array}{c} 0 \\ 1 \end{array}\right] s \\ L_1(t) &= \left[\begin{array}{c} -1 \\ 3 \end{array}\right] (1 - t) + \left[\begin{array}{c} 3 \\ -1 \end{array}\right] t \end{align*}\end{split}\]

we should be able to determine that the lines don’t intersect, but this function is not meant for that check:

_images/segment_intersection2.png
>>> start0 = np.asfortranarray([[1.0, 0.0]])
>>> end0 = np.asfortranarray([[0.0, 1.0]])
>>> start1 = np.asfortranarray([[-1.0, 3.0]])
>>> end1 = np.asfortranarray([[3.0, -1.0]])
>>> _, _, success = segment_intersection(start0, end0, start1, end1)
>>> bool(success)
False

Instead, we use parallel_different():

>>> bool(parallel_different(start0, end0, start1, end1))
True

Note

There is also a Fortran implementation of this function, which will be used if it can be built.

Parameters:
  • start0 (numpy.ndarray) – A 1x2 NumPy array that is the start vector \(S_0\) of the parametric line \(L_0(s)\).
  • end0 (numpy.ndarray) – A 1x2 NumPy array that is the end vector \(E_0\) of the parametric line \(L_0(s)\).
  • start1 (numpy.ndarray) – A 1x2 NumPy array that is the start vector \(S_1\) of the parametric line \(L_1(t)\).
  • end1 (numpy.ndarray) – A 1x2 NumPy array that is the end vector \(E_1\) of the parametric line \(L_1(t)\).
Returns:

Pair of \(s_{\ast}\) and \(t_{\ast}\) such that the lines intersect: \(L_0\left(s_{\ast}\right) = L_1\left(t_{\ast}\right)\) and then a boolean indicating if an intersection was found.

Return type:

Tuple [ float, float, bool ]

bezier._intersection_helpers._parallel_different(start0, end0, start1, end1)

Checks if two parallel lines ever meet.

Meant as a back-up when segment_intersection() fails.

Note

This function assumes but never verifies that the lines are parallel.

In the case that the segments are parallel and lie on the exact same line, finding a unique intersection is not possible. However, if they are parallel but on different lines, then there is a guarantee of no intersection.

In segment_intersection(), we utilized the normal form of the lines (via the cross product):

\[\begin{split}\begin{align*} L_0(s) \times \Delta_0 &\equiv S_0 \times \Delta_0 \\ L_1(t) \times \Delta_1 &\equiv S_1 \times \Delta_1 \end{align*}\end{split}\]

So, we can detect if \(S_1\) is on the first line by checking if

\[S_0 \times \Delta_0 \stackrel{?}{=} S_1 \times \Delta_0.\]

If it is not on the first line, then we are done, the segments don’t meet:

_images/parallel_different1.png
>>> # Line: y = 1
>>> start0 = np.asfortranarray([[0.0, 1.0]])
>>> end0 = np.asfortranarray([[1.0, 1.0]])
>>> # Vertical shift up: y = 2
>>> start1 = np.asfortranarray([[-1.0, 2.0]])
>>> end1 = np.asfortranarray([[3.0, 2.0]])
>>> bool(parallel_different(start0, end0, start1, end1))
True

If \(S_1\) is on the first line, we want to check that \(S_1\) and \(E_1\) define parameters outside of \(\left[0, 1\right]\). To compute these parameters:

\[L_1(t) = S_0 + s_{\ast} \Delta_0 \Longrightarrow s_{\ast} = \frac{\Delta_0^T \left( L_1(t) - S_0\right)}{\Delta_0^T \Delta_0}.\]

For example, the intervals \(\left[0, 1\right]\) and \(\left[\frac{3}{2}, 2\right]\) (via \(S_1 = S_0 + \frac{3}{2} \Delta_0\) and \(E_1 = S_0 + 2 \Delta_0\)) correspond to segments that don’t meet:

_images/parallel_different2.png
>>> start0 = np.asfortranarray([[1.0, 0.0]])
>>> delta0 = np.asfortranarray([[2.0, -1.0]])
>>> end0 = start0 + 1.0 * delta0
>>> start1 = start0 + 1.5 * delta0
>>> end1 = start0 + 2.0 * delta0
>>> bool(parallel_different(start0, end0, start1, end1))
True
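
The two projection parameters from the formula above can be computed directly for this example (a quick check, not the library code):

>>> import numpy as np
>>> start0 = np.asfortranarray([[1.0, 0.0]])
>>> delta0 = np.asfortranarray([[2.0, -1.0]])
>>> start1 = start0 + 1.5 * delta0
>>> end1 = start0 + 2.0 * delta0
>>> denominator = float(delta0.dot(delta0.T))
>>> float(delta0.dot((start1 - start0).T)) / denominator
1.5
>>> float(delta0.dot((end1 - start0).T)) / denominator
2.0

Both computed parameters lie outside \(\left[0, 1\right]\), consistent with the segments not meeting.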

but if the intervals overlap, like \(\left[0, 1\right]\) and \(\left[-1, \frac{1}{2}\right]\), the segments meet:

_images/parallel_different3.png
>>> start1 = start0 - 1.0 * delta0
>>> end1 = start0 + 0.5 * delta0
>>> bool(parallel_different(start0, end0, start1, end1))
False

Similarly, if the second interval completely contains the first, the segments meet:

_images/parallel_different4.png
>>> start1 = start0 + 3.0 * delta0
>>> end1 = start0 - 2.0 * delta0
>>> bool(parallel_different(start0, end0, start1, end1))
False

Note

This function doesn’t currently allow wiggle room around the desired value, i.e. the two values must be bitwise identical. However, the most “correct” version of this function likely should allow for some round off.

Parameters:
  • start0 (numpy.ndarray) – A 1x2 NumPy array that is the start vector \(S_0\) of the parametric line \(L_0(s)\).
  • end0 (numpy.ndarray) – A 1x2 NumPy array that is the end vector \(E_0\) of the parametric line \(L_0(s)\).
  • start1 (numpy.ndarray) – A 1x2 NumPy array that is the start vector \(S_1\) of the parametric line \(L_1(t)\).
  • end1 (numpy.ndarray) – A 1x2 NumPy array that is the end vector \(E_1\) of the parametric line \(L_1(t)\).
Returns:

Indicating if the lines are different.

Return type:

bool

bezier._curve_helpers.get_curvature(nodes, degree, tangent_vec, s)

Compute the signed curvature of a curve at \(s\).

Computed via

\[\frac{B'(s) \times B''(s)}{\left\lVert B'(s) \right\rVert_2^3}\]
_images/get_curvature.png
>>> nodes = np.asfortranarray([
...     [1.0 ,  0.0],
...     [0.75,  2.0],
...     [0.5 , -2.0],
...     [0.25,  2.0],
...     [0.0 ,  0.0],
... ])
>>> s = 0.5
>>> tangent_vec = hodograph(nodes, s)
>>> tangent_vec
array([[-1., 0.]])
>>> curvature = get_curvature(nodes, 4, tangent_vec, s)
>>> curvature
-12.0
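
The value \(-12.0\) can be recovered by hand: \(B'(s)\) is a degree 3 curve whose nodes are the scaled first differences of the original nodes, and \(B''(s)\) comes from the scaled second differences. A sketch of that computation (not the library implementation):

>>> import numpy as np
>>> nodes = np.asfortranarray([
...     [1.0 ,  0.0],
...     [0.75,  2.0],
...     [0.5 , -2.0],
...     [0.25,  2.0],
...     [0.0 ,  0.0],
... ])
>>> degree, s = 4, 0.5
>>> # Nodes of B'(s) are degree * (first differences of the nodes).
>>> first_nodes = degree * np.diff(nodes, axis=0)
>>> weights3 = np.asfortranarray(
...     [(1 - s)**3, 3 * (1 - s)**2 * s, 3 * (1 - s) * s**2, s**3])
>>> b_prime = weights3.dot(first_nodes)
>>> list(b_prime)
[-1.0, 0.0]
>>> # Nodes of B''(s) are degree * (degree - 1) * (second differences).
>>> second_nodes = degree * (degree - 1) * np.diff(nodes, n=2, axis=0)
>>> weights2 = np.asfortranarray([(1 - s)**2, 2 * (1 - s) * s, s**2])
>>> b_double_prime = weights2.dot(second_nodes)
>>> cross = b_prime[0] * b_double_prime[1] - b_prime[1] * b_double_prime[0]
>>> float(cross / np.linalg.norm(b_prime, ord=2)**3)
-12.0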
Parameters:
  • nodes (numpy.ndarray) – The nodes of a curve.
  • degree (int) – The degree of the curve.
  • tangent_vec (numpy.ndarray) – The already computed value of \(B'(s)\)
  • s (float) – The parameter value along the curve.
Returns:

The signed curvature.

Return type:

float

bezier._curve_helpers.newton_refine(nodes, degree, point, s)

Refine a solution to \(B(s) = p\) using Newton’s method.

Computes updates via

\[\mathbf{0} \approx \left(B\left(s_{\ast}\right) - p\right) + B'\left(s_{\ast}\right) \Delta s\]

For example, consider the curve

\[\begin{split}B(s) = \left[\begin{array}{c} 0 \\ 0 \end{array}\right] (1 - s)^2 + \left[\begin{array}{c} 1 \\ 2 \end{array}\right] 2 (1 - s) s + \left[\begin{array}{c} 3 \\ 1 \end{array}\right] s^2\end{split}\]

and the point \(B\left(\frac{1}{4}\right) = \frac{1}{16} \left[\begin{array}{c} 9 \\ 13 \end{array}\right]\).

Starting from the wrong point \(s = \frac{3}{4}\), we have

\[\begin{split}\begin{align*} p - B\left(\frac{3}{4}\right) &= -\frac{1}{2} \left[\begin{array}{c} 3 \\ 1 \end{array}\right] \\ B'\left(\frac{3}{4}\right) &= \frac{1}{2} \left[\begin{array}{c} 7 \\ -1 \end{array}\right] \\ \Longrightarrow \frac{1}{4} \left[\begin{array}{c c} 7 & -1 \end{array}\right] \left[\begin{array}{c} 7 \\ -1 \end{array}\right] \Delta s &= -\frac{1}{4} \left[\begin{array}{c c} 7 & -1 \end{array}\right] \left[\begin{array}{c} 3 \\ 1 \end{array}\right] \\ \Longrightarrow \Delta s &= -\frac{2}{5} \end{align*}\end{split}\]
_images/newton_refine_curve.png
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 2.0],
...     [3.0, 1.0],
... ])
>>> curve = bezier.Curve(nodes, degree=2)
>>> point = curve.evaluate(0.25)
>>> point
array([[ 0.5625, 0.8125]])
>>> s = 0.75
>>> new_s = newton_refine(nodes, 2, point, s)
>>> 5 * (new_s - s)
-2.0
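
The update itself is a scalar least-squares solve. A sketch with the quadratic above (not the library implementation):

>>> import numpy as np
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 2.0],
...     [3.0, 1.0],
... ])
>>> point = np.asfortranarray([0.5625, 0.8125])  # B(1/4), as a flat array
>>> s = 0.75
>>> # Evaluate B(s) and B'(s) directly from the Bernstein form.
>>> b_s = (1 - s)**2 * nodes[0] + 2 * (1 - s) * s * nodes[1] + s**2 * nodes[2]
>>> b_prime = 2 * ((1 - s) * (nodes[1] - nodes[0]) + s * (nodes[2] - nodes[1]))
>>> # Least-squares Newton step: (B'(s)^T B'(s)) ds = B'(s)^T (p - B(s)).
>>> float(b_prime.dot(point - b_s) / b_prime.dot(b_prime))
-0.4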

On curves that are not “valid” (i.e. where \(B(s)\) fails to be injective or has a vanishing derivative), Newton’s method may break down and converge linearly:

_images/newton_refine_curve_cusp.png
>>> nodes = np.asfortranarray([
...     [ 6.0, -3.0],
...     [-2.0,  3.0],
...     [-2.0, -3.0],
...     [ 6.0,  3.0],
... ])
>>> curve = bezier.Curve(nodes, degree=3)
>>> expected = 0.5
>>> point = curve.evaluate(expected)
>>> point
array([[ 0., 0.]])
>>> s_vals = [0.625, None, None, None, None, None]
>>> np.log2(abs(expected - s_vals[0]))
-3.0
>>> s_vals[1] = newton_refine(nodes, 3, point, s_vals[0])
>>> np.log2(abs(expected - s_vals[1]))
-3.983...
>>> s_vals[2] = newton_refine(nodes, 3, point, s_vals[1])
>>> np.log2(abs(expected - s_vals[2]))
-4.979...
>>> s_vals[3] = newton_refine(nodes, 3, point, s_vals[2])
>>> np.log2(abs(expected - s_vals[3]))
-5.978...
>>> s_vals[4] = newton_refine(nodes, 3, point, s_vals[3])
>>> np.log2(abs(expected - s_vals[4]))
-6.978...
>>> s_vals[5] = newton_refine(nodes, 3, point, s_vals[4])
>>> np.log2(abs(expected - s_vals[5]))
-7.978...

Due to round-off, the Newton process terminates with an error that is not close to machine precision \(\varepsilon\) when \(\Delta s = 0\).

>>> s_vals = [0.625]
>>> new_s = newton_refine(nodes, 3, point, s_vals[-1])
>>> while new_s not in s_vals:
...     s_vals.append(new_s)
...     new_s = newton_refine(nodes, 3, point, s_vals[-1])
...
>>> terminal_s = s_vals[-1]
>>> terminal_s == newton_refine(nodes, 3, point, terminal_s)
True
>>> 2.0**(-31) <= abs(terminal_s - 0.5) <= 2.0**(-29)
True

Due to round-off near the cusp, the final error resembles \(\sqrt{\varepsilon}\) rather than machine precision as expected.

Parameters:
  • nodes (numpy.ndarray) – The nodes defining a Bézier curve.
  • degree (int) – The degree of the curve (assumed to be one less than the number of nodes).
  • point (numpy.ndarray) – A point on the curve.
  • s (float) – An “almost” solution to \(B(s) = p\).
Returns:

The updated value \(s + \Delta s\).

Return type:

float

class bezier._intersection_helpers.Intersection(first, s, second, t, point=None, interior_curve=None)

Representation of a curve-curve intersection.

Parameters:
  • first (Curve) – The “first” curve in the intersection.
  • s (float) – The parameter along first where the intersection occurs.
  • second (Curve) – The “second” curve in the intersection.
  • t (float) – The parameter along second where the intersection occurs.
  • point (Optional [ numpy.ndarray ]) – The point where the two curves actually intersect.
  • interior_curve (Optional [ IntersectionClassification ]) – The classification of the intersection.
get_point()

The point where the intersection occurs.

This exists primarily for Curve.intersect().

Returns:The point where the intersection occurs. Returns the stored point if one was provided, otherwise computes the value on the fly.
Return type:numpy.ndarray
class bezier._intersection_helpers.Linearization(curve, error)

A linearization of a curve.

This class is provided as a stand-in for a curve, so it provides a similar interface.

Parameters:
  • curve (Curve) – A curve that is linearized.
  • error (float) – The linearization error. Expected to have been computed via linearization_error().
classmethod from_shape(shape)

Try to linearize a curve (or an already linearized curve).

Parameters:shape (Union [ Curve, Linearization ]) – A curve or an already linearized curve.
Returns:The (potentially linearized) curve.
Return type:Union [ Curve, Linearization ]
subdivide()

Do-nothing method to match the Curve interface.

Returns:List of all subdivided parts, which is just the current object.
Return type:Tuple [ Linearization ]
class bezier._surface_helpers.IntersectionClassification

Enum classifying the “interior” curve in an intersection.

Provided as the output values for classify_intersection().

bezier._surface_helpers.newton_refine(nodes, degree, x_val, y_val, s, t)

Refine a solution to \(B(s, t) = p\) using Newton’s method.

Computes updates via

\[\begin{split}\left[\begin{array}{c} 0 \\ 0 \end{array}\right] \approx \left(B\left(s_{\ast}, t_{\ast}\right) - \left[\begin{array}{c} x \\ y \end{array}\right]\right) + \left[\begin{array}{c c} B_s\left(s_{\ast}, t_{\ast}\right) & B_t\left(s_{\ast}, t_{\ast}\right) \end{array}\right] \left[\begin{array}{c} \Delta s \\ \Delta t \end{array}\right]\end{split}\]

For example, (with weights \(\lambda_1 = 1 - s - t, \lambda_2 = s, \lambda_3 = t\)) consider the surface

\[\begin{split}B(s, t) = \left[\begin{array}{c} 0 \\ 0 \end{array}\right] \lambda_1^2 + \left[\begin{array}{c} 1 \\ 0 \end{array}\right] 2 \lambda_1 \lambda_2 + \left[\begin{array}{c} 2 \\ 0 \end{array}\right] \lambda_2^2 + \left[\begin{array}{c} 2 \\ 1 \end{array}\right] 2 \lambda_1 \lambda_3 + \left[\begin{array}{c} 2 \\ 2 \end{array}\right] 2 \lambda_2 \lambda_3 + \left[\begin{array}{c} 0 \\ 2 \end{array}\right] \lambda_3^2\end{split}\]

and the point \(B\left(\frac{1}{4}, \frac{1}{2}\right) = \frac{1}{4} \left[\begin{array}{c} 5 \\ 5 \end{array}\right]\).

Starting from the wrong point \(s = \frac{1}{2}, t = \frac{1}{4}\), we have

\[\begin{split}\begin{align*} \left[\begin{array}{c} x \\ y \end{array}\right] - B\left(\frac{1}{2}, \frac{1}{4}\right) &= \frac{1}{4} \left[\begin{array}{c} -1 \\ 2 \end{array}\right] \\ DB\left(\frac{1}{2}, \frac{1}{4}\right) &= \frac{1}{2} \left[\begin{array}{c c} 3 & 2 \\ 1 & 6 \end{array}\right] \\ \Longrightarrow \left[\begin{array}{c} \Delta s \\ \Delta t \end{array}\right] &= \frac{1}{32} \left[\begin{array}{c} -10 \\ 7 \end{array}\right] \end{align*}\end{split}\]
_images/newton_refine_surface.png
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 0.0],
...     [2.0, 0.0],
...     [2.0, 1.0],
...     [2.0, 2.0],
...     [0.0, 2.0],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> surface.is_valid
True
>>> (x_val, y_val), = surface.evaluate_cartesian(0.25, 0.5)
>>> x_val, y_val
(1.25, 1.25)
>>> s, t = 0.5, 0.25
>>> new_s, new_t = newton_refine(nodes, 2, x_val, y_val, s, t)
>>> 32 * (new_s - s)
-10.0
>>> 32 * (new_t - t)
7.0
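
The update in that doctest can be checked by solving the \(2 \times 2\) system from the worked example directly (values copied from above; a quick check, not the library code):

>>> import numpy as np
>>> jacobian = 0.5 * np.asfortranarray([
...     [3.0, 2.0],
...     [1.0, 6.0],
... ])
>>> residual = 0.25 * np.asfortranarray([-1.0, 2.0])
>>> # Solve the 2x2 system by Cramer's rule (exact for these values).
>>> det = jacobian[0, 0] * jacobian[1, 1] - jacobian[0, 1] * jacobian[1, 0]
>>> delta_s = (residual[0] * jacobian[1, 1] - residual[1] * jacobian[0, 1]) / det
>>> delta_t = (jacobian[0, 0] * residual[1] - jacobian[1, 0] * residual[0]) / det
>>> float(32 * delta_s), float(32 * delta_t)
(-10.0, 7.0)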
Parameters:
  • nodes (numpy.ndarray) – Array of nodes in a surface.
  • degree (int) – The degree of the surface.
  • x_val (float) – The \(x\)-coordinate of a point on the surface.
  • y_val (float) – The \(y\)-coordinate of a point on the surface.
  • s (float) – Approximate \(s\)-value to be refined.
  • t (float) – Approximate \(t\)-value to be refined.
Returns:

The refined \(s\) and \(t\) values.

Return type:

Tuple [ float, float ]

bezier._surface_helpers.classify_intersection(intersection)

Determine which curve is on the “inside of the intersection”.

This is intended to be a helper for forming a CurvedPolygon from the edge intersections of two Surface-s. In order to move from one intersection to another (or to the end of an edge), the interior edge must be determined at the point of intersection.

The “typical” case is on the interior of both edges:

_images/classify_intersection1.png
>>> nodes1 = np.asfortranarray([
...     [1.0 , 0.0 ],
...     [1.75, 0.25],
...     [2.0 , 1.0 ],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0   , 0.0   ],
...     [1.6875, 0.0625],
...     [2.0   , 0.5   ],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 0.25, 0.5
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> tangent1 = hodograph(curve1, s)
>>> tangent1
array([[ 1.25, 0.75]])
>>> tangent2 = hodograph(curve2, t)
>>> tangent2
array([[ 2. , 0.5]])
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
<IntersectionClassification.first: 'first'>

We determine the interior (i.e. left) one by using the right-hand rule: by embedding the tangent vectors in \(\mathbf{R}^3\), we compute

\[\begin{split}\left[\begin{array}{c} x_1'(s) \\ y_1'(s) \\ 0 \end{array}\right] \times \left[\begin{array}{c} x_2'(t) \\ y_2'(t) \\ 0 \end{array}\right] = \left[\begin{array}{c} 0 \\ 0 \\ x_1'(s) y_2'(t) - x_2'(t) y_1'(s) \end{array}\right].\end{split}\]

If the cross product quantity \(B_1'(s) \times B_2'(t) = x_1'(s) y_2'(t) - x_2'(t) y_1'(s)\) is positive, then the first curve is “outside” / “to the right”, i.e. the second curve is interior. If the cross product is negative, the first curve is interior.
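
For the doctest above, this quantity can be computed directly from the two tangent vectors (a quick check, not part of the library):

>>> import numpy as np
>>> tangent1 = np.asfortranarray([1.25, 0.75])  # B_1'(s) from above
>>> tangent2 = np.asfortranarray([2.0, 0.5])    # B_2'(t) from above
>>> float(tangent1[0] * tangent2[1] - tangent2[0] * tangent1[1])
-0.875

The cross product is negative, so the first curve is interior, matching the first classification above.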

When \(B_1'(s) \times B_2'(t) = 0\), the tangent vectors are parallel, i.e. the intersection is a point of tangency:

_images/classify_intersection2.png
>>> nodes1 = np.asfortranarray([
...     [1.0, 0.0],
...     [1.5, 1.0],
...     [2.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0, 0.0],
...     [1.5, 1.0],
...     [3.0, 0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 0.5, 0.5
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
<IntersectionClassification.tangent_second: 'tangent_second'>

Depending on the direction of the parameterizations, the interior curve may change, but we can use the (signed) curvature of each curve at that point to determine which is on the interior:

_images/classify_intersection3.png
>>> nodes1 = np.asfortranarray([
...     [2.0, 0.0],
...     [1.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [3.0, 0.0],
...     [1.5, 1.0],
...     [0.0, 0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 0.5, 0.5
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
<IntersectionClassification.tangent_first: 'tangent_first'>

When the curves are moving in opposite directions at a point of tangency, there is no side to choose. Either the point of tangency is not part of any CurvedPolygon intersection

_images/classify_intersection4.png
>>> nodes1 = np.asfortranarray([
...     [2.0, 0.0],
...     [1.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [0.0, 0.0],
...     [1.5, 1.0],
...     [3.0, 0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 0.5, 0.5
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
<IntersectionClassification.opposed: 'opposed'>

or the point of tangency is a “degenerate” part of two CurvedPolygon intersections. It is “degenerate” because, depending on the direction of approach, the point would need to be classified in two incompatible ways:

_images/classify_intersection5.png
>>> nodes1 = np.asfortranarray([
...     [1.0, 0.0],
...     [1.5, 1.0],
...     [2.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [3.0, 0.0],
...     [1.5, 1.0],
...     [0.0, 0.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 0.5, 0.5
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
Traceback (most recent call last):
  ...
NotImplementedError: Curves moving in opposite direction
                     but define overlapping arcs.

However, if the curvature of each curve is identical, we don’t try to distinguish further:

_images/classify_intersection6.png
>>> nodes1 = np.asfortranarray([
...     [ 0.375,  0.0625],
...     [-0.125, -0.0625],
...     [-0.125,  0.0625],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [ 0.75,  0.25],
...     [-0.25, -0.25],
...     [-0.25,  0.25],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 0.5, 0.5
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> hodograph(curve1, s)
array([[-0.5, 0. ]])
>>> hodograph(curve2, t)
array([[-1., 0.]])
>>> curvature(curve1, s)
-2.0
>>> curvature(curve2, t)
-2.0
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
Traceback (most recent call last):
  ...
NotImplementedError: Tangent curves have same curvature.

In addition to points of tangency, intersections that happen at the end of an edge need special handling:

_images/classify_intersection7.png
>>> nodes1a = np.asfortranarray([
...     [0.0, 0.0 ],
...     [4.5, 0.0 ],
...     [9.0, 2.25],
... ])
>>> curve1a = bezier.Curve(nodes1a, degree=2)
>>> nodes2 = np.asfortranarray([
...     [11.25, 0.0],
...     [ 9.0 , 4.5],
...     [ 2.75, 1.0],
... ])
>>> curve2 = bezier.Curve(nodes2, degree=2)
>>> s, t = 1.0, 0.375
>>> curve1a.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> intersection = Intersection(curve1a, s, curve2, t)
>>> classify_intersection(intersection)
Traceback (most recent call last):
  ...
ValueError: ('Intersection occurs at the end of an edge',
             's', 1.0, 't', 0.375)
>>>
>>> nodes1b = np.asfortranarray([
...     [9.0, 2.25 ],
...     [4.5, 2.375],
...     [0.0, 2.5  ],
... ])
>>> curve1b = bezier.Curve(nodes1b, degree=2)
>>> curve1b.evaluate(0.0) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> curve1b._previous_edge = curve1a
>>> intersection = Intersection(curve1b, 0.0, curve2, t)
>>> classify_intersection(intersection)
<IntersectionClassification.first: 'first'>

As above, some intersections at the end of an edge are part of an actual intersection. However, some surfaces may just “kiss” at a corner intersection:

_images/classify_intersection8.png
>>> nodes1 = np.asfortranarray([
...     [0.25 , 1.0  ],
...     [0.0  , 0.5  ],
...     [0.0  , 0.0  ],
...     [0.625, 0.875],
...     [0.5  , 0.375],
...     [1.0  , 0.75 ],
... ])
>>> surface1 = bezier.Surface(nodes1, degree=2)
>>> nodes2 = np.asfortranarray([
...     [ 0.0625, 0.5  ],
...     [-0.25  , 1.0  ],
...     [-1.0   , 1.0  ],
...     [-0.5   , 0.125],
...     [-1.0   , 0.5  ],
...     [-1.0   , 0.0  ],
... ])
>>> surface2 = bezier.Surface(nodes2, degree=2)
>>> curve1, _, _ = surface1.edges
>>> curve2, _, _ = surface2.edges
>>> s, t = 0.5, 0.0
>>> curve1.evaluate(s) == curve2.evaluate(t)
array([[ True, True]], dtype=bool)
>>> intersection = Intersection(curve1, s, curve2, t)
>>> classify_intersection(intersection)
<IntersectionClassification.ignored_corner: 'ignored_corner'>

Note

This assumes the intersection occurs in \(\mathbf{R}^2\) but doesn’t check this.

Note

This function doesn’t allow wiggle room / round-off when checking endpoints, nor when checking if the cross product is near zero, nor when curvatures are compared. However, the most “correct” version of this function likely should allow for some round off.

Parameters:intersection (Intersection) – An intersection object.
Returns:The “inside” curve type, based on the classification enum.
Return type:IntersectionClassification
Raises:ValueError – If the intersection occurs at the end of either curve involved. This is because we want to classify which curve to move forward on, and we can’t move past the end of a segment.
bezier._surface_helpers._jacobian_det(nodes, degree, st_vals)

Compute \(\det(D B)\) at a set of values.

This requires that \(B \in \mathbf{R}^2\).

Note

This assumes but does not check that each (s, t) in st_vals is inside the reference triangle.

Warning

This relies on helpers in bezier for computing the Jacobian of the surface. However, these helpers are not part of the public API and may change or be removed.

>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [1.0, 0.0],
...     [2.0, 0.0],
...     [0.0, 1.0],
...     [1.5, 1.5],
...     [0.0, 2.0],
... ])
>>> surface = bezier.Surface(nodes, degree=2)
>>> st_vals = np.asfortranarray([
...     [0.25, 0.0  ],
...     [0.75, 0.125],
...     [0.5 , 0.5  ],
... ])
>>> s_vals, t_vals = st_vals.T
>>> surface.evaluate_cartesian_multi(st_vals)
array([[ 0.5    , 0.     ],
       [ 1.59375, 0.34375],
       [ 1.25   , 1.25   ]])
>>> # B(s, t) = [s(t + 2), t(s + 2)]
>>> s_vals * (t_vals + 2)
array([ 0.5 , 1.59375, 1.25 ])
>>> t_vals * (s_vals + 2)
array([ 0. , 0.34375, 1.25 ])
>>> jacobian_det(nodes, 2, st_vals)
array([ 4.5 , 5.75, 6. ])
>>> # det(DB) = 2(s + t + 2)
>>> 2 * (s_vals + t_vals + 2)
array([ 4.5 , 5.75, 6. ])
Parameters:
  • nodes (numpy.ndarray) – Nodes defining a Bézier surface \(B(s, t)\).
  • degree (int) – The degree of the surface \(B\).
  • st_vals (numpy.ndarray) – Nx2 array of Cartesian \((s, t)\) inputs at which the derivative surfaces \(B_s\) and \(B_t\) are evaluated.
Returns:

Array of all determinant values, one for each row in st_vals.

Return type:

numpy.ndarray

bezier._surface_helpers._2x2_det(mat)

Compute the determinant of a 2x2 matrix.

This is “needed” because numpy.linalg.det() uses a more generic determinant implementation which can introduce rounding even when the simple \(a d - b c\) will suffice. For example:

>>> import numpy as np
>>> mat = np.asfortranarray([
...     [-1.5   , 0.1875],
...     [-1.6875, 0.0   ],
... ])
>>> actual_det = -mat[0, 1] * mat[1, 0]
>>> np_det = np.linalg.det(mat)
>>> np.abs(actual_det - np_det) == np.spacing(actual_det)
True
Parameters:mat (numpy.ndarray) – A 2x2 matrix.
Returns:The determinant of mat.
Return type:float
bezier._implicitization.bezier_roots(coeffs)

Compute polynomial roots from a polynomial in the Bernstein basis.

Note

This assumes the caller passes in a 1D array but does not check.

This takes the polynomial

\[f(s) = \sum_{j = 0}^n b_{j, n} \cdot C_j\]

and uses the variable \(\sigma = \frac{s}{1 - s}\) to rewrite as

\[\begin{split}\begin{align*} f(s) &= (1 - s)^n \sum_{j = 0}^n \binom{n}{j} C_j \sigma^j \\ &= (1 - s)^n \sum_{j = 0}^n \widetilde{C_j} \sigma^j. \end{align*}\end{split}\]

Then it uses an eigenvalue solver to find the roots of

\[g(\sigma) = \sum_{j = 0}^n \widetilde{C_j} \sigma^j\]

and convert them back into roots of \(f(s)\) via \(s = \frac{\sigma}{1 + \sigma}\).

For example, consider

\[\begin{split}\begin{align*} f_0(s) &= 2 (2 - s)(3 + s) \\ &= 12(1 - s)^2 + 11 \cdot 2s(1 - s) + 8 s^2 \end{align*}\end{split}\]

First, we compute the companion matrix for

\[g_0(\sigma) = 12 + 22 \sigma + 8 \sigma^2\]
>>> coeffs0 = np.asfortranarray([12.0, 11.0, 8.0])
>>> companion0, _, _ = bernstein_companion(coeffs0)
>>> companion0
array([[-2.75, -1.5 ],
       [ 1.  ,  0.  ]])

then take the eigenvalues of the companion matrix:

>>> sigma_values0 = np.linalg.eigvals(companion0)
>>> sigma_values0
array([-2.  , -0.75])

after transforming them, we have the roots of \(f(s)\):

>>> sigma_values0 / (1.0 + sigma_values0)
array([ 2., -3.])
>>> bezier_roots(coeffs0)
array([ 2., -3.])
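
The entries of that companion matrix come straight from the scaled coefficients \(\widetilde{C_j} = \binom{n}{j} C_j\); a quick check (not the library code):

>>> import numpy as np
>>> coeffs0 = np.asfortranarray([12.0, 11.0, 8.0])
>>> scaled = coeffs0 * np.asfortranarray([1.0, 2.0, 1.0])  # binom(2, j)
>>> # Top row of the companion matrix of the monic version of g_0.
>>> list(-scaled[1::-1] / scaled[2])
[-2.75, -1.5]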

In cases where \(s = 1\) is a root, the lead coefficient of \(g\) would be \(0\), so there is a reduction in the companion matrix.

\[\begin{split}\begin{align*} f_1(s) &= 6 (s - 1)^2 (s - 3) (s - 5) \\ &= 90 (1 - s)^4 + 33 \cdot 4s(1 - s)^3 + 8 \cdot 6s^2(1 - s)^2 \end{align*}\end{split}\]
>>> coeffs1 = np.asfortranarray([90.0, 33.0, 8.0, 0.0, 0.0])
>>> companion1, degree1, effective_degree1 = bernstein_companion(
...     coeffs1)
>>> companion1
array([[-2.75 , -1.875],
       [ 1.   ,  0.   ]])
>>> degree1
4
>>> effective_degree1
2

so the roots are the roots determined from \(s = \frac{\sigma}{1 + \sigma}\), together with \(s = 1\) repeated once for each factor of \((1 - s)\) (i.e. the difference between the degree and the effective degree):

>>> bezier_roots(coeffs1)
array([ 3.,  5.,  1.,  1.])

In some cases, a polynomial is represented with an “elevated” degree:

\[\begin{split}\begin{align*} f_2(s) &= 3 (s^2 + 1) \\ &= 3 (1 - s)^3 + 3 \cdot 3s(1 - s)^2 + 4 \cdot 3s^2(1 - s) + 6 s^3 \end{align*}\end{split}\]

This results in a “point at infinity” \(\sigma = -1 \Longleftrightarrow s = \infty\):

>>> coeffs2 = np.asfortranarray([3.0, 3.0, 4.0, 6.0])
>>> companion2, _, _ = bernstein_companion(coeffs2)
>>> companion2
array([[-2. , -1.5, -0.5],
       [ 1. ,  0. ,  0. ],
       [ 0. ,  1. ,  0. ]])
>>> sigma_values2 = np.linalg.eigvals(companion2)
>>> sigma_values2
array([-1.0+0.j , -0.5+0.5j, -0.5-0.5j])

so we drop any values \(\sigma\) that are sufficiently close to \(-1\):

>>> expected2 = np.asfortranarray([1.0j, -1.0j])
>>> roots2 = bezier_roots(coeffs2)
>>> np.allclose(expected2, roots2, rtol=2e-15, atol=0.0)
True
Parameters:coeffs (numpy.ndarray) – A 1D array of coefficients in the Bernstein basis.
Returns:A 1D array containing the roots.
Return type:numpy.ndarray
bezier._implicitization.lu_companion(top_row, value)

Compute an LU-factored \(C - t I\) and its 1-norm.

Note

The output of this function is intended to be used with dgecon from LAPACK. dgecon expects both the 1-norm of the matrix and expects the matrix to be passed in an already LU-factored form (via dgetrf).

The companion matrix \(C\) is given by the top_row, for example, the polynomial \(t^3 + 3 t^2 - t + 2\) has a top row of -3, 1, -2 and the corresponding companion matrix is:

\[\begin{split}\left[\begin{array}{c c c} -3 & 1 & -2 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{array}\right]\end{split}\]

After doing a full cycle of the rows (shifting the first to the last and moving all other rows up), row reduction of \(C - t I\) yields

\[\begin{split}\left[\begin{array}{c c c} 1 & -t & 0 \\ 0 & 1 & -t \\ -3 - t & 1 & -2 \end{array}\right] = \left[\begin{array}{c c c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -3 - t & 1 + t(-3 - t) & 1 \end{array}\right] \left[\begin{array}{c c c} 1 & -t & 0 \\ 0 & 1 & -t \\ 0 & 0 & -2 + t(1 + t(-3 - t)) \end{array}\right]\end{split}\]

and in general, the terms in the bottom row correspond to the intermediate values involved in evaluating the polynomial via Horner’s method.

>>> top_row = np.asfortranarray([-3.0, 1.0, -2.0])
>>> t_val = 0.5
>>> lu_mat, one_norm = lu_companion(top_row, t_val)
>>> lu_mat
array([[ 1.   , -0.5  ,  0.   ],
       [ 0.   ,  1.   , -0.5  ],
       [-3.5  , -0.75 , -2.375]])
>>> one_norm
4.5
>>> l_mat = np.tril(lu_mat, k=-1) + np.eye(3)
>>> u_mat = np.triu(lu_mat)
>>> a_mat = l_mat.dot(u_mat)
>>> a_mat
array([[ 1. , -0.5,  0. ],
       [ 0. ,  1. , -0.5],
       [-3.5,  1. , -2. ]])
>>> np.linalg.norm(a_mat, ord=1)
4.5
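
For this example, the bottom-right entry of the factored matrix is exactly the negative of the polynomial evaluated at \(t\) via Horner’s method, matching lu_mat[2, 2] above:

>>> t_val = 0.5
>>> -(t_val**3 + 3.0 * t_val**2 - t_val + 2.0)
-2.375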
Parameters:
  • top_row (numpy.ndarray) – 1D array, top row of companion matrix.
  • value (float) – The \(t\) value used to form \(C - t I\).
Returns:

Pair of

  • 2D array of LU-factored form of \(C - t I\), with the non-diagonal part of \(L\) stored in the strictly lower triangle and \(U\) stored in the upper triangle (we skip the permutation matrix, as it won’t impact the 1-norm)
  • the 1-norm of the matrix \(C - t I\)

As mentioned above, these two values are meant to be used with dgecon.

Return type:

Tuple [ numpy.ndarray, float ]

bezier._implicitization.bezier_value_check(coeffs, s_val, rhs_val=0.0)

Check if a polynomial in the Bernstein basis evaluates to a value.

This is intended to be used for root checking, i.e. for a polynomial \(f(s)\) and a particular value \(s_{\ast}\):

Is it true that \(f\left(s_{\ast}\right) = 0\)?

Does so by re-stating as a matrix rank problem. As in bezier_roots(), we can rewrite

\[f(s) = (1 - s)^n g\left(\sigma\right)\]

for \(\sigma = \frac{s}{1 - s}\) and \(g\left(\sigma\right)\) written in the power / monomial basis. Now, checking if \(g\left(\sigma\right) = 0\) is a matter of checking that \(\det\left(C_g - \sigma I\right) = 0\) (where \(C_g\) is the companion matrix of \(g\)).

Due to issues of numerical stability, we’d rather ask if \(C_g - \sigma I\) is singular to numerical precision. A typical approach, using the singular values (assuming \(C_g\) is \(m \times m\)), is to declare the matrix numerically singular if

\[\sigma_m < m \varepsilon \sigma_1 \Longleftrightarrow \frac{1}{\kappa_2} < m \varepsilon\]

(where \(\kappa_2\) is the 2-norm condition number of the matrix). Since we also know that \(\kappa_2 < \kappa_1\), a stronger requirement would be

\[\frac{1}{\kappa_1} < \frac{1}{\kappa_2} < m \varepsilon.\]

This is more useful since it is much easier to compute the 1-norm condition number / reciprocal condition number (and methods in LAPACK are provided for doing so).
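
As a rough illustration (using the 2-norm condition number from numpy rather than the LAPACK-based 1-norm estimate), consider checking the root \(\sigma = -2\) against the \(2 \times 2\) companion matrix of \(g_0\) from the bezier_roots() example:

>>> import numpy as np
>>> companion = np.asfortranarray([
...     [-2.75, -1.5],
...     [ 1.0 ,  0.0],
... ])
>>> sigma = -2.0
>>> shifted = companion - sigma * np.eye(2)
>>> # 1 / kappa_2 below m * eps indicates (numerical) singularity.
>>> bool(1.0 / np.linalg.cond(shifted) < 2 * np.spacing(1.0))
True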

Parameters:
  • coeffs (numpy.ndarray) – A 1D array of coefficients in the Bernstein basis representing a polynomial.
  • s_val (float) – The value to check on the polynomial: \(f(s) = r\).
  • rhs_val (Optional [ float ]) – The value to check that the polynomial evaluates to. Defaults to 0.0.
Returns:

Indicates if \(f\left(s_{\ast}\right) = r\) (where \(s_{\ast}\) is s_val and \(r\) is rhs_val).

Return type:

bool

Native Libraries

bezier has optional speedups implemented in Fortran. These are incorporated into the Python interface via Cython.

The subroutines provided there can be called from Fortran, C, C++, Cython and any other language that can invoke a foreign C function (e.g. Go via cgo).

After bezier has been installed with these speedups, the library provides helpers to make it easier to build code that depends on them.

C Headers

The C headers for libbezier will be included in the source tree:

>>> include_directory = bezier.get_include()
>>> include_directory
'.../site-packages/bezier/include'
>>> print_tree(include_directory)
include/
  bezier/
    curve.h
    curve_intersection.h
    helpers.h
    surface.h
  bezier.h

Note that this includes a catch-all bezier.h that just includes all of the headers.

Cython .pxd Declarations

In addition to the header files, several cimport-able .pxd Cython declaration files are provided:

>>> bezier_directory = parent_directory(include_directory)
>>> bezier_directory
'.../site-packages/bezier'
>>> print_tree(bezier_directory, suffix='.pxd')
bezier/
  _curve.pxd
  _curve_intersection.pxd
  _helpers.pxd
  _surface.pxd

For example, cimport bezier._curve will provide all the functions in bezier/curve.h.

Static Library

The actual library libbezier is included as a single static library (a .lib file on Windows and a .a file elsewhere):

>>> lib_directory = bezier.get_lib()
>>> lib_directory
'.../site-packages/bezier/lib'
>>> print_tree(lib_directory)
lib/
  libbezier.a

Note

A static library is used (rather than a shared or dynamic library) because the “final” install location of the Python package is not dependable. Even on the same machine with the same operating system, bezier can be installed in virtual environments, in different Python versions, as an egg or wheel, and so on.

Warning

When bezier is installed via pip, it will likely be installed from a Python wheel. These wheels will be pre-built and the Fortran extensions will be compiled with GNU Fortran (gfortran). As a result, libbezier will depend on libgfortran.

This can be problematic due to version conflicts, ABI incompatibility, a desire to use a different Fortran compiler (e.g. ifort) and a host of other reasons. Some of the standard tooling for building wheels will try to address this by adding a bezier/.libs directory with a version of libgfortran that is compatible with libbezier, e.g.

.../site-packages/bezier/.libs/libgfortran-ed201abd.so.3.0.0

If present, this directory can be used when linking. If that is not feasible, then bezier can be built from source via:

$ python setup.py build_ext
$ # OR
$ python setup.py build_ext --fcompiler=${FC}

By providing a filename via an environment variable, a “journal” can be stored of the compiler commands invoked to build the extension:

$ export BEZIER_JOURNAL=path/to/journal.txt
$ python setup.py build_ext
$ unset BEZIER_JOURNAL

Building a Python Extension

To incorporate libbezier into a Python extension, either via Cython, C, C++ or some other means, simply include the header and library directories:

>>> import setuptools
>>>
>>> extension = setuptools.Extension(
...     'wrapper',
...     ['wrapper.c'],
...     include_dirs=[
...         bezier.get_include(),
...     ],
...     libraries=['bezier'],
...     library_dirs=[
...         bezier.get_lib(),
...     ],
... )
>>> extension
<setuptools.extension.Extension('wrapper') at 0x...>

Typically, depending on libbezier implies (transitive) dependence on libgfortran. See the warning in Static Library for more details.

Development

Adding Features

In order to add a feature to bezier:

  1. Discuss: File an issue to notify maintainer(s) of the proposed changes (i.e. just sending a large PR with a finished feature may catch maintainer(s) off guard).
  2. Add tests: The feature must work fully on the following CPython versions: 2.7, 3.5 and 3.6 on both UNIX and Windows. In addition, the feature should have 100% line coverage.
  3. Documentation: The feature must (should) be documented with helpful doctest examples wherever relevant.

Running Unit Tests

We recommend using nox (nox-automation) to run unit tests:

$ nox -s "unit_tests(python_version='2.7')"
$ nox -s "unit_tests(python_version='3.5')"
$ nox -s "unit_tests(python_version='3.6')"
$ nox -s "unit_tests(python_version='pypy')"
$ nox -s  unit_tests  # Run all versions

However, pytest can be used directly (though it won’t manage dependencies or build extensions):

$ PYTHONPATH=src/ python2.7 -m pytest tests/
$ PYTHONPATH=src/ python3.5 -m pytest tests/
$ PYTHONPATH=src/ python3.6 -m pytest tests/
$ PYTHONPATH=src/ MATPLOTLIBRC=test/ pypy -m pytest tests/

Native Code Extensions

Many low-level computations have alternate implementations in Fortran. When using nox, the bezier package will automatically be installed into a virtual environment and the native extensions will be built during install.

However, if you run the tests directly from the source tree via

$ PYTHONPATH=src/ python -m pytest tests/

some unit tests may be skipped. The unit tests for the Fortran implementations will skip (rather than fail) if the extensions aren’t compiled and present in the source tree. To compile the native extensions, make sure you have a valid Fortran compiler and run

$ python setup.py build_ext --inplace
$ # OR
$ python setup.py build_ext --inplace --fcompiler=${FC}

To actually make sure the correct compiler commands are invoked, provide a filename as the BEZIER_JOURNAL environment variable and then the commands invoked will be written there. The nox session check_journal uses this journaling option to verify the commands used to compile the extension on CircleCI.

Test Coverage

bezier has 100% line coverage. The coverage is checked on every build and uploaded to coveralls.io via the COVERALLS_REPO_TOKEN environment variable set in the CircleCI environment.

To run the coverage report locally:

$ nox -s cover
$ # OR
$ PYTHONPATH=src/:functional_tests/ python -m pytest \
>   --cov=bezier \
>   --cov=tests \
>   tests/ \
>   functional_tests/test_segment_box.py

Slow Tests

To run unit tests without tests that have been (explicitly) marked slow, use the --ignore-slow flag:

$ nox -s "unit_tests(python_version='2.7')" -- --ignore-slow
$ nox -s "unit_tests(python_version='3.5')" -- --ignore-slow
$ nox -s "unit_tests(python_version='3.6')" -- --ignore-slow
$ nox -s  unit_tests                        -- --ignore-slow

These slow tests have been identified via:

$ ...
$ nox -s "unit_tests(python_version='3.6')" -- --durations=10

and then marked with pytest.mark.skipif.

Slow Install

Installing NumPy with PyPy can take upwards of two minutes, which makes it prohibitive to create a new environment for testing.

In order to avoid this penalty, the WHEELHOUSE environment variable can be used to instruct nox to install NumPy from locally built wheels in the pypy sessions.

To pre-build a NumPy wheel:

$ pypy -m pip wheel --wheel-dir=${WHEELHOUSE} numpy

The Docker image for the CircleCI test environment has already pre-built this wheel and stored it in the /wheelhouse directory. So, in the CircleCI environment, the WHEELHOUSE environment variable is set to /wheelhouse.

Functional Tests

Line coverage and unit tests are not entirely sufficient to test numerical software. As a result, there is a fairly large collection of functional tests for bezier.

These give a broad sampling of curve-curve intersection, surface-surface intersection and segment-box intersection problems to check both the accuracy (i.e. detecting all intersections) and the precision of the detected intersections.

To run the functional tests:

$ nox -s "functional(python_version='2.7')"
$ nox -s "functional(python_version='3.5')"
$ nox -s "functional(python_version='3.6')"
$ nox -s "functional(python_version='pypy')"
$ nox -s  functional  # Run all versions
$ # OR
$ export PYTHONPATH=src/:functional_tests/
$ python2.7 -m pytest functional_tests/
$ python3.5 -m pytest functional_tests/
$ python3.6 -m pytest functional_tests/
$ MATPLOTLIBRC=test/ pypy -m pytest functional_tests/
$ unset PYTHONPATH

For example, the following curve-curve intersection is a functional test case:

https://raw.githubusercontent.com/dhermes/bezier/0.5.0/docs/images/curves11_and_26.png

and there is a Curve-Curve Intersection document which captures many of the cases in the functional tests.

A surface-surface intersection functional test case:

https://raw.githubusercontent.com/dhermes/bezier/0.5.0/docs/images/surfaces1Q_and_2Q.png

a segment-box functional test case:

https://raw.githubusercontent.com/dhermes/bezier/0.5.0/docs/images/test_goes_through_box08.png

and a “locate point on surface” functional test case:

https://raw.githubusercontent.com/dhermes/bezier/0.5.0/docs/images/test_surface3_and_point1.png

Functional Test Data

The curve-curve and surface-surface intersection test cases are stored in JSON files. This way, the test cases are programming language agnostic and can be repurposed. The JSON schemas for these files are stored in the functional_tests/schema directory.

Coding Style

Code is PEP8 compliant and this is enforced with flake8 and pylint.

To check compliance:

$ nox -s lint

A few extensions and overrides have been specified in the pylintrc configuration for bezier.

Docstring Style

We require docstrings on all public objects and enforce this with our lint checks. The docstrings mostly follow PEP257 and are written in the Google style, e.g.

Args:
    path (str): The path of the file to wrap
    field_storage (FileStorage): The :class:`FileStorage` instance to wrap
    temporary (bool): Whether or not to delete the file when the File
       instance is destructed

Returns:
    BufferedFileStorage: A buffered writable file descriptor

In order to support these in Sphinx, we use the Napoleon extension. In addition, the sphinx-docstring-typing Sphinx extension is used to allow for type annotation for arguments and result (introduced in Python 3.5).

Documentation

The documentation is built with Sphinx and automatically updated on RTD every time a commit is pushed to master.

To build the documentation locally:

$ nox -s docs
$ # OR (from a Python 3.5 or later environment)
$ PYTHONPATH=src/ ./scripts/build_docs.sh

Documentation Snippets

A large effort is made to provide useful snippets in documentation. To make sure these snippets are valid (and remain valid over time), doctest is used to check that the interpreter output in the snippets are valid.

To run the documentation tests:

$ nox -s doctest
$ # OR (from a Python 3.5 or later environment)
$ PYTHONPATH=src/ NO_IMAGES=True sphinx-build -W \
>   -b doctest \
>   -d docs/build/doctrees \
>   docs \
>   docs/build/doctest

Documentation Images

Many images are included to illustrate the curves / surfaces / etc. under consideration and to display the result of the operation being described. To keep these images up-to-date with the doctest snippets, the images are created as doctest cleanup.

In addition, the images in the Curve-Curve Intersection document and this document are generated as part of the functional tests.

To regenerate all the images:

$ nox -s docs_images
$ # OR (from a Python 3.5 or later environment)
$ export MATPLOTLIBRC=docs/ PYTHONPATH=src/
$ sphinx-build -W \
>   -b doctest \
>   -d docs/build/doctrees \
>   docs \
>   docs/build/doctest
$ python functional_tests/test_segment_box.py --save-plot
$ python functional_tests/test_surface_locate.py --save-plot
$ python functional_tests/make_curve_curve_images.py
$ python functional_tests/make_surface_surface_images.py
$ unset MATPLOTLIBRC PYTHONPATH

Continuous Integration

Tests are run on CircleCI and AppVeyor after every commit. To see which tests are run, see the CircleCI config and the AppVeyor config.

On CircleCI, a Docker image is used to provide fine-grained control over the environment. There is a base python-multi Dockerfile that just has the Python versions we test in. The image used in our CircleCI builds (from bezier Dockerfile) installs dependencies needed for testing (such as nox and NumPy).

Deploying New Versions

New versions are pushed to PyPI manually after a git tag is created. The process is manual (rather than automated) for several reasons:

  • The documentation and README (which acts as the landing page text on PyPI) will be updated with links scoped to the versioned tag (rather than master).
  • Several badges on the documentation landing page (index.rst) are irrelevant to a fixed version (such as the “latest” version of the package).
  • The build badges in the README and the documentation will be changed to point to a fixed (and passing) build that has already completed (will be the build that occurred when the tag was pushed). If the builds pushed to PyPI automatically, a build would need to link to itself while being run.
  • Wheels need to be built for Linux, Windows and OS X. This process is becoming better, but is still scattered across many different build systems. Each wheel will be pushed directly to PyPI via twine.
  • The release will be manually pushed to TestPyPI so the landing page can be visually inspected and the package can be installed from TestPyPI rather than from a local file.

Supported Python Versions

bezier explicitly supports CPython 2.7, 3.5 and 3.6.

Supported versions can be found in the nox.py config.

Versioning

bezier follows semantic versioning.

It is currently in major version zero (0.y.z), which means that anything may change at any time and the public API should not be considered stable.

This library provides support for Bézier curves and Bézier surfaces.

Dive in and take a look!

_images/surfaces6Q_and_7Q.png

Why Bézier?

A Bézier curve (and surface, etc.) is a parametric curve that uses the Bernstein basis:

\[b_{j, n} = \binom{n}{j} s^j (1 - s)^{n - j}\]

to define a curve as a linear combination:

\[B(s) = \sum_{j = 0}^n b_{j, n} \cdot v_j.\]

This comes from the fact that the weights sum to one:

\[b_{0, n} + b_{1, n} + \cdots + b_{n, n} = \left(s + (1 - s)\right)^n = 1.\]

This can be generalized to higher order by considering three, four, etc. non-negative weights that sum to one (in the above we have the two non-negative weights \(s\) and \(1 - s\)).

Due to their simple form, Bézier curves:

  • can easily model geometric objects as parametric curves, surfaces, etc.
  • can be computed in an efficient and numerically stable way via de Casteljau’s algorithm (see the short sketch after this list)
  • can utilize convex optimization techniques for many algorithms (such as curve-curve intersection), since curves (and surfaces, etc.) are convex combinations of the basis
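
As a concrete illustration of the second point, a minimal de Casteljau evaluation for a quadratic curve (a sketch, not the implementation used by this library):

>>> import numpy as np
>>> def de_casteljau(nodes, s):
...     # Repeated linear interpolation of the control points; a numerically
...     # stable way to evaluate B(s).
...     work = np.asfortranarray(nodes, dtype=float)
...     while work.shape[0] > 1:
...         work = (1.0 - s) * work[:-1, :] + s * work[1:, :]
...     return work[0, :]
...
>>> nodes = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> list(de_casteljau(nodes, 0.5))
[0.5, 0.5]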

Many applications – as well as the history of their development – are described in “The Bernstein polynomial basis: A centennial retrospective”, for example:

  • aids physical analysis using finite element methods (FEM) on isogeometric models by using geometric shape functions called NURBS to represent data
  • used in robust control of dynamic systems; utilizes convexity to create a hull of curves

Installing

bezier can be installed with pip:

$ pip              install --upgrade bezier
$ python    -m pip install --upgrade bezier --user
$ python2.7 -m pip install --upgrade bezier --user
$ python3.6 -m pip install --upgrade bezier --user

bezier is open-source, so you can alternatively grab the source code from GitHub and install from source.

Getting Started

For example, to create a curve:

>>> nodes1 = np.asfortranarray([
...     [0.0, 0.0],
...     [0.5, 1.0],
...     [1.0, 0.0],
... ])
>>> curve1 = bezier.Curve(nodes1, degree=2)

The intersection (points) between two curves can also be determined:

>>> nodes2 = np.asfortranarray([
...     [0.0 ,  0.0],
...     [0.25,  2.0],
...     [0.5 , -2.0],
...     [0.75,  2.0],
...     [1.0 ,  0.0],
... ])
>>> curve2 = bezier.Curve.from_nodes(nodes2)
>>> intersections = curve1.intersect(curve2)
>>> intersections
array([[ 0.31101776, 0.42857143],
       [ 0.68898224, 0.42857143],
       [ 0.        , 0.        ],
       [ 1.        , 0.        ]])

and then we can plot these curves (along with their intersections):

>>> import matplotlib.pyplot as plt
>>> import seaborn
>>> seaborn.set()
>>>
>>> ax = curve1.plot(num_pts=256)
>>> _ = curve2.plot(num_pts=256, ax=ax)
>>> lines = ax.plot(
...     intersections[:, 0], intersections[:, 1],
...     marker='o', linestyle='None', color='black')
>>> _ = ax.axis('scaled')
>>> _ = ax.set_xlim(-0.125, 1.125)
>>> _ = ax.set_ylim(-0.0625, 0.625)
>>> plt.show()
_images/curves1_and_13.png

For API-level documentation, check out the Bézier Package documentation.

Development

To work on adding a feature or to run the functional tests, see the DEVELOPMENT doc for more information on how to get started.

License

bezier is made available under the Apache 2.0 License. For more details, see the LICENSE.