6.5 Complexity of Motion Planning

This section summarizes theoretical work that characterizes the complexity of motion planning problems. Note that this is not equivalent to characterizing the running time of particular algorithms. The existence of an algorithm serves as an upper bound on the problem’s difficulty because it is a proof by example that solving the problem requires no more time than what is needed by the algorithm. On the other hand, lower bounds are also very useful because they give an indication of the difficulty of the problem itself. Suppose, for example, you are given an algorithm that solves a problem in time O(n^2). Does it make sense to try to find a more efficient algorithm? Does it make sense to try to find a general-purpose motion planning algorithm that runs in time that is polynomial in the dimension? Lower bounds provide answers to questions such as this. Usually lower bounds are obtained by concocting bizarre, complicated examples that are allowed by the problem definition but were usually not considered by the person who first formulated the problem. In this line of research, progress is made by either raising the lower bound (unless it is already tight) or by showing that a narrower version of the problem still allows such bizarre examples. The latter case occurs often in motion planning.

6.5.1 Lower Bounds

Lower bounds have been established for a variety of motion planning problems and also a wide variety of planning problems in general. To interpret these bounds, a basic understanding of the theory of computation is required [462, 891]. This fascinating subject will be unjustly summarized in a few paragraphs. A problem is a set of instances that are each carefully encoded as a binary string. An algorithm is formally considered as a Turing machine, which is a finite-state machine that can read and write bits to an unbounded piece of tape. Other models of computation also exist, such as integer RAM and real RAM (see [118]); there are debates as to which model is most appropriate, especially when performing geometric computations with real numbers. The standard Turing machine model will be assumed from here onward. Algorithms are usually formulated to make a binary output, which involves accepting or rejecting a problem instance that is initially written to the tape and given to the algorithm. In motion planning, this amounts to deciding whether a solution path exists for a given problem instance.

Languages A language is a set of binary strings associated with a problem. It represents the complete set of instances of a problem. An algorithm is said to decide a language if in finite time it correctly accepts all strings that belong to it and rejects all others. The interesting question is: How much time or space is required to decide a language? This question is asked of the problem, under the assumption that the best possible algorithm would be used to decide it. (We can easily think of inefficient algorithms that waste resources.)

A complexity class is a set of languages that can all be decided within some specified resource bound. The class P is the set of all languages (and hence problems) for which a polynomial-time algorithm exists (i.e., the algorithm runs in time O(n^k) for some integer k). By definition, an algorithm is called efficient if it decides its associated language in polynomial time.7 If no efficient algorithm exists, then the problem is called intractable. The relationship between several other classes that often emerge in theoretical motion planning is shown in Figure 6.40. The class NP is the set of languages that can be solved in polynomial time by a nondeterministic Turing machine. Some discussion of nondeterministic machines appears in Section 11.3.2. Intuitively, it means that solutions can be verified in polynomial time because the machine magically knows which choices to make while trying to make the decision. The class PSPACE is the set of languages that can be decided with no more than a polynomial amount of storage space during the execution of the algorithm (NPSPACE = PSPACE, so there is no separate nondeterministic version). The class EXPTIME is the set of languages that can be decided in time O(2^(n^k)) for some integer k. It is known that EXPTIME is larger than P, but it is not known precisely where the boundaries of NP and PSPACE lie. It might be the case that P = NP = PSPACE (although hardly anyone believes this), or it could be that NP = PSPACE = EXPTIME.

Hardness and completeness Since an easier class is included as a subset of a harder one, it is helpful to have a notion of a language (i.e., problem) being among the hardest possible within a class. Let X refer to either P, NP, PSPACE,

7Note that this definition may be absurd in practice; an algorithm that runs in time O(n^90125) would probably not be too efficient for most purposes.

Figure 6.40: It is known that P ⊂ EXPTIME is a strict subset; however, it is not known precisely how large NP and PSPACE are.

or EXPTIME. A language A is called X-hard if every language B in class X is polynomial-time reducible to A. In short, this means that in polynomial time, any instance of B can be translated into an instance of A, and then the decision for A can be correctly translated back in polynomial time to correctly decide B. Thus, if A can be decided, then within a polynomial-time factor, every language in X can be decided. The hardness concept can even be applied to a language (problem) that does not belong to the class. For example, we can declare that a language A is NP-hard even if A ∉ NP (it could be harder and lie in EXPTIME, for example). If it is known that the language is both hard for some class X and is also a member of X, then it is called X-complete (i.e., NP-complete, PSPACE-complete, etc.).8 Note that because of this uncertainty regarding P, NP, and PSPACE, one cannot say that a problem is intractable if it is NP-hard or PSPACE-hard, but one can, however, if the problem is EXPTIME-hard. One additional remark: it is useful to remember that PSPACE-hard implies NP-hard.

Lower bounds for motion planning The general motion planning problem, Formulation 4.1, was shown in 1979 to be PSPACE-hard by Reif [817]. In fact, the problem was restricted to polyhedral obstacles and a finite number of polyhedral robot bodies attached by spherical joints. The coordinates of all polyhedra are assumed to be in Q (this enables a finite-length string encoding of the problem instance). The proof introduces a fascinating motion planning instance that involves many attached, dangling robot parts that must work their way through a complicated system of tunnels, which together simulates the operation of a symmetric Turing machine. Canny later established that the problem in Formulation 4.1 (expressed using polynomials that have rational coefficients) lies in PSPACE [173]. Therefore, the general motion planning problem is PSPACE-complete.

Many other lower bounds have been shown for a variety of planning problems. One famous example is the Warehouseman’s problem shown in Figure 6.41. This

8If you remember hearing that a planning problem is NP-something, but cannot remember whether it was NP-hard or NP-complete, then it is safe to say NP-hard because NP-complete implies NP-hard. This can similarly be said for other classes, such as PSPACE-complete vs. PSPACE-hard.

Figure 6.41: Even motion planning for a bunch of translating rectangles inside of a rectangular box in R² is PSPACE-hard (and hence, NP-hard).

problem involves a finite number of translating, axis-aligned rectangles in a rectangular world. It was shown in [461] to be PSPACE-hard. This example is a beautiful illustration of how a deceptively simple problem formulation can lead to such a high lower bound. More recently, it was even shown that planning for Sokoban, which is a warehouseman’s problem on a discrete 2D grid, is also PSPACE-hard [255]. Other general motion planning problems that were shown to be PSPACE-hard include motion planning for a chain of bodies in the plane [460, 490] and motion planning for a chain of bodies among polyhedral obstacles in R³. Many lower bounds have been established for a variety of extensions and variations of the general motion planning problem. For example, in [172] it was established that a certain form of planning under uncertainty for a robot in a 3D polyhedral environment is NEXPTIME-hard, which is harder than any of the classes shown in Figure 6.40; the hardest problems in NEXPTIME are believed to require doubly exponential time to solve.

The lower bound or hardness results depend significantly on the precise representation of the problem. For example, it is possible to make problems look easier by making instance encodings that are exponentially longer than they should be. The running time or space required is expressed in terms of n, the input size. If the motion planning problem instances are encoded with exponentially more bits than necessary, then a language that belongs to P is obtained. As long as the instance encoding is within a polynomial factor of the optimal encoding (this can be made precise using Kolmogorov complexity [630]), then this bizarre behavior is avoided. Another important part of the representation is to pay attention to how parameters in the problem formulation can vary. We can redefine motion planning to be all instances for which the dimension of C is never greater than 2^1000. The number of dimensions is sufficiently large for virtually any application. The resulting language for this problem belongs to P because cylindrical algebraic decomposition and Canny’s algorithm can solve any motion planning problem in polynomial time. Why? This is because now the dimension parameter in the time-complexity expressions can be replaced by 2^1000, which is a constant. This formally implies that an efficient algorithm is already known for any motion planning problem that we would ever care about. This implication has no practical value, however. Thus, be very careful when interpreting theoretical bounds.

The lower bounds may appear discouraging. There are two general directions to go from here. One is to weaken the requirements and tolerate algorithms that yield some kind of resolution completeness or probabilistic completeness. This approach was taken in Chapter 5 and leads to many efficient algorithms. Another direction is to define narrower problems that do not include the bizarre constructions that lead to bad lower bounds. For the narrower problems, it may be possible to design interesting, efficient algorithms. This approach was taken for the methods in Sections 6.2 and 6.3. In Section 6.5.3, upper bounds for some algorithms that address these narrower problems will be presented, along with bounds for the general motion planning algorithms. Several of the upper bounds involve Davenport-Schinzel sequences, which are therefore covered next.

6.5.2 Davenport-Schinzel Sequences

Davenport-Schinzel sequences provide a powerful characterization of the structure that arises from the lower or upper envelope of a collection of functions. The lower envelope of five functions is depicted in Figure 6.42. Such envelopes arise in many problems throughout computational geometry, including many motion planning problems. They are an important part of the design and analysis of many modern algorithms, and the resulting algorithm’s time complexity usually involves terms that follow directly from the sequences. Therefore, it is worthwhile to understand some of the basics before interpreting some of the results of Section 6.5.3. Much more information on Davenport-Schinzel sequences and their applications appears in [866]. The brief introduction presented here is based on [865].

For positive integers n and s, an (n, s) Davenport-Schinzel sequence is a sequence (u_1, . . . , u_m) composed from a set of n symbols such that:

        1. The same symbol may not appear consecutively in the sequence. In other words, u_i ≠ u_{i+1} for any i such that 1 ≤ i < m.
        2. The sequence does not contain any alternating subsequence that uses two symbols and has length s + 2. (A subsequence can be formed by deleting any elements in the original sequence.) The condition can be expressed as: there do not exist s + 2 indices i_1 < i_2 < · · · < i_{s+2} for which u_{i_1} = u_{i_3} = u_{i_5} = · · · = a and u_{i_2} = u_{i_4} = u_{i_6} = · · · = b, for some distinct symbols a and b.
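To make the two conditions concrete, here is a small illustrative checker (not from the text; the function name and list-based representation are invented for this sketch) that tests whether a candidate sequence satisfies both Davenport-Schinzel conditions for a given s:

```python
from itertools import combinations

def is_ds_sequence(seq, s):
    """Check whether seq satisfies both (n, s) Davenport-Schinzel conditions."""
    # Condition 1: the same symbol may not appear consecutively.
    if any(a == b for a, b in zip(seq, seq[1:])):
        return False
    # Condition 2: no alternating subsequence of two symbols with length s + 2.
    for a, b in combinations(set(seq), 2):
        # Keep only occurrences of a and b; the longest alternating
        # subsequence is one more than the number of symbol switches.
        alt = [x for x in seq if x in (a, b)]
        length = 1
        for prev, cur in zip(alt, alt[1:]):
            if cur != prev:
                length += 1
        if length >= s + 2:
            return False
    return True
```

For instance, (1, 2, 3, 2, 1) passes for s = 2, while (1, 2, 1, 2) contains an alternation of length 4 and is rejected.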

Figure 6.42: The lower envelope of a collection of five functions, f_1(x), . . . , f_5(x).

As an example, an (n, 3) sequence cannot appear as (a · · · b · · · a · · · b · · · a), in which each · · · is filled in with any sequence of symbols. Let λ_s(n) denote the maximum possible length of an (n, s) Davenport-Schinzel sequence.

The connection between Figure 6.42 and these sequences can now be explained. Consider the sequence of function indices that visit the lower envelope. In the example, this sequence is (5, 2, 3, 4, 1). Suppose it is known that each pair of functions intersects in at most s places. If there are n real-valued continuous functions, then the sequence of function indices must be an (n, s) Davenport- Schinzel sequence. It is amazing that such sequences cannot be very long. For a fixed s, they are close to being linear.
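The index sequence along a lower envelope can be approximated numerically by sampling. The following sketch (the three functions and the sampling grid are hypothetical choices for illustration, not the five functions of Figure 6.42) records which function attains the minimum as x sweeps left to right:

```python
def envelope_sequence(funcs, xs):
    """Return the 1-based indices of the minimizing function at each sample
    point, with consecutive duplicates collapsed."""
    seq = []
    for x in xs:
        i = min(range(len(funcs)), key=lambda j: funcs[j](x)) + 1
        if not seq or seq[-1] != i:
            seq.append(i)
    return seq

# Two parabolas and a constant: each pair of curves crosses at most twice.
fs = [lambda x: (x - 1) ** 2, lambda x: (x + 1) ** 2, lambda x: 0.5]
xs = [i / 100.0 for i in range(-300, 301)]
print(envelope_sequence(fs, xs))  # prints [3, 2, 3, 1, 3]
```

Since each pair of these curves intersects in at most two points, the printed index sequence must be a Davenport-Schinzel sequence for the corresponding s, and indeed no symbol repeats consecutively and no long two-symbol alternation appears.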

The standard bounds for Davenport-Schinzel sequences are [865]:9

λ_1(n) = n    (6.32)

λ_2(n) = 2n − 1    (6.33)

λ_3(n) = Θ(n α(n))    (6.34)

λ_4(n) = Θ(n · 2^α(n))    (6.35)

λ_{2s}(n) ≤ n · 2^(α(n)^(s−1) + C_{2s}(n))    (6.36)

λ_{2s+1}(n) ≤ n · 2^(α(n)^(s−1) lg α(n) + C′_{2s+1}(n))    (6.37)

λ_{2s}(n) = Ω(n · 2^((1/(s−1)!) α(n)^(s−1) + C′_{2s}(n))).    (6.38)

In the expressions above, C_r(n) and C′_r(n) are terms that are smaller than their leading exponents. The α(n) term is the inverse Ackerman function, which is an extremely slow-growing function that appears frequently in algorithms. The Ackerman function is defined as follows. Let A_1(m) = 2m, and let A_{n+1}(m) represent m applications of A_n. Thus, A_1(m) performs doubling, A_2(m) performs exponentiation, and A_3(m) performs tower exponentiation, which makes a stack of 2’s,

2^2^···^2,    (6.39)

that has height m. The Ackerman function is defined as A(n) = A_n(n). This function grows so fast that A(4) is already an exponential tower of 2’s that has height 65536. Thus, the inverse Ackerman function, α, grows very slowly. If n is less than or equal to an exponential tower of 65536 2’s, then α(n) ≤ 4. Even when it appears in exponents of the Davenport-Schinzel bounds, it does not represent a significant growth rate.

9The following asymptotic notation is used: O(f(n)) denotes an upper bound, Ω(f(n)) denotes a lower bound, and Θ(f(n)) means that the bound is tight (both upper and lower). This notation is used in most books on algorithms [243].
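The hierarchy and the inverse function can be sketched directly (the names A and alpha are ad hoc for this sketch; the recursion iterates A_k m times starting from 1, which matches the definition above since, e.g., A_2(m) = 2^m):

```python
def A(k, m):
    """Ackerman hierarchy: A_1(m) = 2m; A_{k+1}(m) is A_k applied m times,
    starting from 1."""
    if k == 1:
        return 2 * m
    x = 1
    for _ in range(m):
        x = A(k - 1, x)
    return x

def alpha(n):
    """Inverse Ackerman for practical n. A(4, 4) already dwarfs any
    physically meaningful quantity, so it is never evaluated here."""
    for k in (1, 2, 3):
        if n <= A(k, k):
            return k
    return 4  # any n up to an exponential tower of 65536 2's
```

As a check, A(3, 4) is a tower of four 2’s, i.e., 65536, and alpha returns 4 even for inputs as large as 10^100.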

Example 6.9 (Lower Envelope of Line Segments) One interesting application of Davenport-Schinzel sequences is to the lower envelope of a set of line segments in R². Since segments in general position may appear multiple times along the lower envelope, the total number of edges is Θ(λ_3(n)) = Θ(n α(n)), which is higher than one would obtain from infinite lines. There are actually arrangements of segments in R² that reach this bound; see [866]. .
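As a sanity check on the first two bounds above, the maximum length λ_s(n) can be found by exhaustive search for tiny n. This is an illustrative brute-force sketch (exponential in the worst case and usable only for very small inputs; the function name is invented):

```python
def lambda_s(n, s):
    """Brute-force the maximum length of an (n, s) Davenport-Schinzel
    sequence over symbols 0, ..., n-1."""
    best = 0

    def ok_to_append(seq, c):
        if seq and seq[-1] == c:
            return False  # would violate the no-repetition condition
        for b in set(seq):
            if b == c:
                continue
            # Longest alternation between b and c after appending c.
            alt = [x for x in seq if x in (b, c)] + [c]
            length = 1
            for p, q in zip(alt, alt[1:]):
                if q != p:
                    length += 1
            if length >= s + 2:
                return False  # forbidden alternating subsequence
        return True

    def extend(seq):
        nonlocal best
        best = max(best, len(seq))
        for c in range(n):
            if ok_to_append(seq, c):
                extend(seq + [c])

    extend([])
    return best
```

This reproduces λ_1(3) = 3 and λ_2(3) = 5 = 2·3 − 1, matching (6.32) and (6.33).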

6.5.3 Upper Bounds

The upper bounds for motion planning problems arise from the existence of complete algorithms that solve them. This section proceeds by starting with the most general bounds, which are based on the methods of Section 6.4, and concludes with bounds for simpler motion planning problems.

General algorithms The first upper bound for the general motion planning problem of Formulation 4.1 came from the application of cylindrical algebraic decomposition [852]. Let n be the dimension of C. Let m be the number of polynomials in F, which are used to define C_obs. Recall from Section 4.3.3 how quickly this grows for simple examples. Let d be the maximum degree among the polynomials in F. The maximum degree of the resulting polynomials is bounded by O(d^(2^(n−1))), and the total number of polynomials is bounded by O((md)^(3^(n−1))). The total running time required to use cylindrical algebraic decomposition for motion planning is bounded by (md)^(2^(O(n))).10 Note that the algorithm is doubly exponential in dimension n but polynomial in m and d. It can theoretically be declared to be efficient on a space of motion planning problems of bounded dimension (although, it certainly is not efficient for motion planning in any practical sense).

10It may seem odd for O(·) to appear in the middle of an expression. In this context, it means that there exists some c ∈ [0, ∞) such that the running time is bounded by (md)^(2^(cn)). Note that another O is not necessary in front of the whole formula.
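To get a rough feel for the gap between the doubly exponential decomposition bound and the singly exponential bounds discussed next, one can compare the exponent 2^n against a polynomial exponent such as n^4. Unit constants are assumed here purely for illustration; the actual constants hidden in the O(·) terms are not specified by the bounds:

```python
# Hypothetical unit constants: compare the 2^n exponent from the
# (md)^(2^(cn)) bound with the n^4 exponent from a d^(O(n^4)) term.
for n in (4, 8, 16, 20, 32):
    print(f"n={n:2d}: 2^n = {2 ** n}, n^4 = {n ** 4}")
```

With these toy constants the polynomial exponent dominates for small n (the two coincide at n = 16), but beyond that the doubly exponential term pulls away without comparison.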

Since the general problem is PSPACE-complete, it appears unavoidable that a complete, general motion planning algorithm will require a running time that is exponential in dimension. Since cylindrical algebraic decomposition is doubly exponential, it led many in the 1980s to wonder whether this upper bound could be lowered. This was achieved by Canny’s roadmap algorithm, for which the running time is bounded by m^n (lg m) d^(O(n^4)). Hence, it is singly exponential, which appears very close to optimal because it is up against the lower bound that seems to be implied by PSPACE-hardness (and the fact that problems exist that require a roadmap with (md)^n connected components [77]). Much of the algorithm’s complexity is due to finding a suitable deterministic perturbation to put the input polynomials into general position. A randomized algorithm can alternatively be used, for which the randomized expected running time is bounded by m^n (lg m) d^(O(n^2)). For a randomized algorithm [719], the randomized expected running time is still a worst-case upper bound, but averaged over random “coin tosses” that are introduced internally in the algorithm; it does not reflect any kind of average over the expected input distribution. Thus, these two bounds represent the best-known upper bounds for the general motion planning problem. Canny’s algorithm may also be applied to solve the kinematic closure problems of Section 4.4, but the complexity does not reflect the fact that the dimension, k, of the algebraic variety is less than n, the dimension of C. A roadmap algorithm that is particularly suited for this problem is introduced in [76], and its running time is bounded by m^(k+1) d^(O(n^2)). This serves as the best-known upper bound for the problems of Section 4.4.

Specialized algorithms Now upper bounds are summarized for some narrower problems, which can be solved more efficiently than the general problem. All of the problems involve either two or three degrees of freedom. Therefore, we expect that the bounds are much lower than those for the general problem. In many cases, the Davenport-Schinzel sequences of Section 6.5.2 arise. Most of the bounds presented here are based on algorithms that are not practical to implement; they mainly serve to indicate the best asymptotic performance that can be obtained for a problem. Most of the bounds mentioned here are included in [865].

Consider the problem from Section 6.2, in which the robot translates in W = R² and C_obs is polygonal. Suppose that A is a convex polygon that has k edges and O is the union of m disjoint, convex polygons with disjoint interiors, and their total number of edges is n. In this case, the boundary of C_free (computed by Minkowski difference; see Section 4.3.2) has at most 6m − 12 nonreflex vertices (interior angles less than π) and n + km reflex vertices (interior angles greater than π). The free space, C_free, can be decomposed and searched in time O((n + km) lg² n) [518, 865]. Using randomized algorithms, the bound reduces to O((n + km) · 2^α(n) lg n) randomized expected time. Now suppose that A is a single nonconvex polygonal region described by k edges and that O is a similar polygonal region described by n edges. The Minkowski difference could yield as many as Ω(k²n²) edges for C_obs. This can be avoided if the search is performed within a single connected component of C_free. Based on analysis that uses Davenport-Schinzel sequences, it can be shown that the worst connected component may have complexity Θ(kn α(k)), and the planning problem can be solved in time O(kn lg² n) deterministically, or for a randomized algorithm, O(kn · 2^α(n) lg n) randomized expected time is needed. More generally, if C_obs consists of n algebraic curves in R², each with degree no more than d, then the motion planning problem for translation only can be solved deterministically in time O(λ_{s+2}(n) lg² n), or with a randomized algorithm in O(λ_{s+2}(n) lg n) randomized expected time. In these expressions, λ_{s+2}(n) is the bound (6.37) obtained from the (n, s + 2) Davenport-Schinzel sequence, and s ≤ d².

For the case of the line-segment robot of Section 6.3.4 in an obstacle region described with n edges, an O(n⁵)-time algorithm was given. This is not the best possible running time for solving the line-segment problem, but the method is easier to understand than others that are more efficient. In [748], a roadmap algorithm based on retraction is given that solves the problem in O(n² lg n lg* n) time, in which lg* n is the number of times that lg has to be iterated on n to yield a result less than or equal to 1 (i.e., it is a very small, insignificant term; for practical purposes, you can imagine that the running time is O(n² lg n)). The tightest known upper bound is O(n² lg n) [625]. It is established in [517] that there exist examples for which the solution path requires Ω(n²) length to encode. For the case of a line segment moving in R³ among polyhedral obstacles with a total of n vertices, a complete algorithm that runs in time O(n^(4+ε)) for any ε > 0 was given in [547]. In [517] it was established that solution paths of complexity Ω(n⁴) exist.
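The iterated logarithm mentioned above is easy to compute directly; a minimal sketch:

```python
import math

def lg_star(n):
    """Number of times lg must be applied to n before the result is <= 1."""
    count = 0
    x = n
    while x > 1:
        x = math.log2(x)  # math.log2 accepts arbitrarily large ints
        count += 1
    return count
```

Since lg* n ≤ 5 for every n up to 2^65536, the term is indeed insignificant for any practical input size.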

Now consider the case for which C = SE(2), A is a convex polygon with k edges, and O is a polygonal region described by n edges. The boundary of C_free has no more than O(kn λ_6(kn)) edges and can be computed to solve the motion planning problem in time O(kn λ_6(kn) lg kn) [10, 11]. An algorithm that runs in time O(k⁴ n λ_3(n) lg n) and provides better clearance between the robot and obstacles is given in [205]. In [55] (some details also appear in [588]), an algorithm is presented, and even implemented, that solves the more general case in which A is nonconvex in time O(k³n³ lg(kn)). The number of faces of C_obs could be as high as Ω(k³n³) for this problem. By explicitly representing and searching only one connected component, the best-known upper bound for the problem is O((kn)^(2+ε)), in which ε > 0 may be chosen arbitrarily small [428].

In the final case, suppose that A translates in W = R³ to yield C = R³. For a polyhedron or polyhedral region, let its complexity be the total number of faces, edges, and vertices. If A is a polyhedron with complexity k, and O is a polyhedral region with complexity n, then the boundary of C_free is a polyhedral surface of complexity Θ(k³n³). As for other problems, if the search is restricted to a single component, then the complexity is reduced. The motion planning problem in this case can be solved in time O((kn)^(2+ε)) [42]. If A is convex and there are m convex obstacles, then the best-known bound is O(kmn lg² m) time. More generally, if C_obs is bounded by n algebraic patches of constant maximum degree, then a vertical decomposition method solves the motion planning problem within a single connected component of C_free in time O(n^(2+ε)).

Further Reading

Most of the literature on combinatorial planning is considerably older than the sampling-based planning literature. A nice collection of early papers appears in [854]; this includes [460, 748, 749, 817, 851, 852, 853]. The classic motion planning textbook of Latombe [588] covers most of the methods presented in this chapter. The coverage here does not follow [588], which makes separate categories for cell decomposition methods and roadmap methods. A cell decomposition is constructed to produce a roadmap; hence, they are unified in this chapter. An excellent reference for material in combinatorial algorithms, computational geometry, and complete algorithms for motion planning is the collection of survey papers in [403].

Section 6.2 follows the spirit of basic algorithms from computational geometry. For a gentle introduction to computational geometry, including a nice explanation of vertical decomposition, see [264]. Other sources for computational geometry include [129, 302, 806]. To understand the difficulties in computing optimal decompositions of polygons, see [757]. See [650, 709, 832] for further reading on computing shortest paths.

Cell decompositions and cell complexes are very important in computational geometry and algebraic topology. Section 6.3 provided a brief perspective that was tailored to motion planning. For simplicial complexes in algebraic topology, see [496, 535, 834]; for singular complexes, see [834]. In computational geometry, various kinds of cell decompositions arise. Some of the most widely studied decompositions are triangulations and arrangements [426], which are regions generated by a collection of primitives, such as lines or circles in the plane. For early cell decomposition methods in motion planning, see [854]. A survey of computational topology appears in [954].

The most modern and complete reference for the material in Section 6.4 is [77]. A gentle introduction to computational algebraic geometry is given in [250]. For details regarding algebraic computations with polynomials, see [704]. A survey of computational algebraic geometry appears in [705]. In addition to [77], other general references to cylindrical algebraic decomposition are [40, 232]. For its use in motion planning, see [588, 852]. The main reference for Canny’s roadmap algorithm is [173]. Alternative high-level overviews to the one presented in Section 6.4.3 appear in [220, 588]. Variations and improvements to the algorithm are covered in [77]. A potential function-based extension of Canny’s roadmap algorithm is developed in [176].

For further reading on the complexity of motion planning, consult the numerous references given in Section 6.5.

Exercises

    1. Extend the vertical decomposition algorithm to correctly handle the case in which C_obs has two or more points that lie on the same vertical line. This includes the case of vertical segments. Random perturbations are not allowed.

    2. Fully describe and prove the correctness of the bitangent computation method shown in Figure 6.14, which avoids trigonometric functions. Make certain that all types of bitangents (in general position) are considered.

Figure 6.43: Determine the cylindrical algebraic decomposition obtained by pro- jecting onto the x-axis.

    3. Develop an algorithm that uses the plane-sweep principle to efficiently compute a representation of the union of two nonconvex polygons.
    4. Extend the vertical cell decomposition algorithm of Section 6.2.2 to work for obstacle boundaries that are described as chains of circular arcs and line segments.
    5. Extend the shortest-path roadmap algorithm of Section 6.2.4 to work for obstacle boundaries that are described as chains of circular arcs and line segments.
    6. Derive the equation for the Conchoid of Nicomedes, shown in Figure 6.24, for the case of a line-segment robot contacting an obstacle vertex and edge simultaneously.
    7. Propose a resolution-complete algorithm for motion planning of the line-segment robot in a polygonal obstacle region. The algorithm should compute exact C-space obstacle slices for any fixed orientation, θ; however, the algorithm should use van der Corput sampling over the set [0, 2π) of orientations.
    8. Determine the result of cylindrical algebraic decomposition for unit spheres S¹, S², S³, S⁴, . . .. Each Sⁿ is expressed as a unit sphere in R^(n+1). Graphically depict the cases of S¹ and S². Also, attempt to develop an expression for the number of cells as a function of n.
    9. Determine the cylindrical algebraic decomposition for the three intersecting circles shown in Figure 6.43. How many cells are obtained?
    10. Using the matrix in (6.28), show that the result of Canny’s roadmap for the torus, shown in Figure 6.39, is correct. Use the torus equation

(x_1² + x_2² + x_3² − (r_1² + r_2²))² − 4 r_1² (r_2² − x_3²) = 0,    (6.40)

in which r_1 is the major circle, r_2 is the minor circle, and r_1 > r_2.

    11. Propose a vertical decomposition algorithm for a polygonal robot that can translate in the plane and even continuously vary its scale. How would the algorithm be modified to instead work for a robot that can translate or be sheared?
    12. Develop a shortest-path roadmap algorithm for a flat torus, defined by identifying opposite edges of a square. Use Euclidean distance but respect the identifications when determining the shortest path. Assume the robot is a point and the obstacles are polygonal.

Implementations

    13. Implement the vertical cell decomposition planning algorithm of Section 6.2.2.
    14. Implement the maximum-clearance roadmap planning algorithm of Section 6.2.3.
    15. Implement a planning algorithm for a point robot that moves in W = R³ among polyhedral obstacles. Use vertical decomposition.
    16. Implement an algorithm that performs a cylindrical decomposition of a polygonal obstacle region.
    17. Implement an algorithm that computes the cell decomposition of Section 6.3.4 for the line-segment robot.
    18. Experiment with cylindrical algebraic decomposition. The project can be greatly facilitated by utilizing existing packages for performing basic operations in computational algebraic geometry.
    19. Implement the algorithm proposed in Exercise 7.
