Rational Matrices: Technical Details
Wolfgang Scherrer and Bernd Funovits
2026-01-22
Introduction
This vignette provides technical implementation details and
mathematical theory underlying the rationalmatrices
package. It complements the main user guide (Rational Matrices vignette)
with in-depth coverage of algorithms, numerical methods, and theoretical
foundations.
Topics covered:
- Left Coprime Polynomials - Part of Analysis & Properties (#5)
- Column Reduced Matrix - Supporting theory for Normal Forms (#4)
- Realization Algorithms - Deep dive into Realization Algorithms (#2)
- Operations with State-Space Realizations - Theory for State-Space Tools (#6)
- Schur, QZ, Lyapunov - Numerical Methods (#7)
- Reflect Poles and Zeroes - Theory for Pole/Zero Reflection (#8)
For implementation details of these topics, see the source files listed in CLAUDE.md and the main vignette.
Left Coprime Polynomials
Functional Area: Analysis & Properties (#5)
See R/is_methods.R for implementation.
A polynomial matrix $a(z)$ is called left prime if it has full row rank everywhere in the complex plane. Clearly this implies that $a(z)$ is square or “wide”, i.e. if $a(z)$ is $(m \times n)$-dimensional then $m \leq n$ must hold.
A pair of (compatible) polynomial matrices $(a(z), b(z))$ is called left coprime if the matrix $(a(z), b(z))$ is left prime. This case is important for the structure of left matrix fraction descriptions. Suppose $k(z) = a^{-1}(z) b(z)$, where $a(z)$ is a square, nonsingular polynomial matrix. If the pair $(a(z), b(z))$ is not left coprime, then we may cancel a common, non-unimodular factor and thus obtain a “simpler” representation of $k(z)$.
We discuss two strategies for testing whether a polynomial matrix is left prime. The first approach uses a Hermite normal form (HNF) and the second one tackles this problem via a (singular) pencil.
The approach via singular pencils is numerically stable and thus is
implemented in the function is.coprime(). See also the
examples below.
First note that the problem is easy to tackle in the case that $a(z)$ has degree zero. Here we just have to check the left kernel of the (constant) coefficient matrix of $a(z)$. Also the case where $a(z)$ is “tall”, i.e. where the number of rows is larger than the number of columns, is trivial: in this case $a(z)$ clearly cannot be left prime, since the rank of $a(z)$ is smaller than the number of rows for any $z \in \mathbb{C}$.
Therefore, throughout this section we assume that $a(z)$ is an $(m \times n)$-dimensional polynomial matrix with degree $p \geq 1$ and $m \leq n$.
Hermite Normal Form
Consider the row Hermite form $h(z) = a(z) u(z)$ obtained by elementary column operations, where $u(z)$ is an $(n \times n)$-dimensional unimodular matrix and $h(z)$ is an $(m \times n)$-dimensional lower (quasi) triangular matrix.
The $(m \times m)$-dimensional matrix $\bar{h}(z)$ is obtained from $h(z)$ by selecting the first $m$ columns. Note that $\bar{h}(z)$ is a square, lower triangular matrix, whose diagonal elements are either monic or zero; if a diagonal element is nonzero, then all elements to the left of it have lower degree.
Furthermore note that the rank of $a(z)$ is equal to the rank of $h(z)$ for all $z \in \mathbb{C}$. Therefore the rank of $a(z)$ is less than $m$ if and only if $z$ is a zero of the product of the diagonal entries of $\bar{h}(z)$. The following cases may occur:
- All diagonal elements are constant (and hence equal to one, since they are monic). In this case $a(z)$ has full row rank for all $z \in \mathbb{C}$ and thus $a(z)$ is left prime.
- One of the diagonal elements is zero. In this case the rank of $a(z)$ is less than $m$ for all $z \in \mathbb{C}$ and thus $a(z)$ is not left prime.
- All diagonal elements are nonzero and at least one of them has degree larger than zero. In this case $a(z)$ has full row rank except at the zeroes of the product of the diagonal elements. Also in this case, $a(z)$ is not left prime.
Hence, the polynomial matrix $a(z)$ is left prime if and only if all diagonal elements of $\bar{h}(z)$ are equal to one.
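The scalar analogue of this criterion is worth spelling out: two scalar polynomials are coprime iff their greatest common divisor has degree zero. The following Python sketch (purely illustrative; the package itself is R code operating on polynomial matrices, and the helper names and tolerance handling here are our own) tests this via the Euclidean algorithm.

```python
def poly_trim(p, tol=1e-12):
    """Drop numerically zero leading (highest-order) coefficients."""
    q = list(p)
    while len(q) > 1 and abs(q[-1]) < tol:
        q.pop()
    return q

def poly_mod(a, b, tol=1e-12):
    """Remainder of the polynomial division a(z) / b(z)."""
    b = poly_trim(b, tol)
    r = poly_trim(a, tol)
    while len(r) >= len(b) and any(abs(c) >= tol for c in r):
        factor = r[-1] / b[-1]
        shift = len(r) - len(b)
        for i, c in enumerate(b):
            r[shift + i] -= factor * c
        r = poly_trim(r, tol)
    return r

def is_coprime_scalar(a, b, tol=1e-12):
    """Euclidean algorithm: coprime iff the gcd has degree zero."""
    a, b = poly_trim(a, tol), poly_trim(b, tol)
    while any(abs(c) >= tol for c in b):
        a, b = b, poly_mod(a, b, tol)
    return len(poly_trim(a, tol)) == 1

# (z-1)(z-2) and (z-3) share no zero; (z-1)(z-2) and (z-1)(z+5) share z = 1
print(is_coprime_scalar([2.0, -3.0, 1.0], [-3.0, 1.0]))      # True
print(is_coprime_scalar([2.0, -3.0, 1.0], [-5.0, 4.0, 1.0])) # False
```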
Generalized Pencils
As reference for generalized pencils, see
- (Gantmacher 1959, vol. 2, chap. XII (in particular §4 on page 35ff))
- (Kailath 1980, 393ff)
- (Demmel and Kågström 1993)
We first construct a pencil, defined by a pair of constant matrices $(A, B)$, such that for any $z \in \mathbb{C}$ the left kernel of $a(z)$ and the left kernel of the pencil have the same dimension. This means that $a(z)$ is left prime if and only if the pencil is left prime. Furthermore, the zeroes of $a(z)$ and of the pencil are the same.
The condition that a row vector lies in the left kernel of $a(z)$ is equivalent to a set of linear conditions on the coefficient matrices of $a(z)$. This follows immediately by rewriting the above condition; see also Kailath (1980), page 393ff. Combining the last two equations gives
Therefore the matrix $a(z)$ has a non-trivial left kernel if and only if the pencil has a non-trivial left kernel. This implies that $a(z)$ is left prime if and only if the pencil is left prime.
Staircase Form
In order to analyze the left kernel of this pencil, we transform the pencil (by multiplication with orthogonal matrices from the left and the right) into a so-called staircase form. To simplify notation, from now on $m$ and $n$ denote the dimensions of the pencil. Note that the pencil is in general not square, which means that we have to deal with non-regular (singular) pencils.
The procedure described below works for an arbitrary pencil, i.e. we do not impose the above special structure, and we also consider the case $m > n$.
If the pencil is “tall”, i.e. $m > n$, then it has a non-trivial left kernel for all $z \in \mathbb{C}$. In this case the pencil (and the matrix $a(z)$) is not left prime.
If the pencil is square, i.e. $m = n$, and $B$ is nonsingular, then the zeroes of the pencil (and the zeroes of $a(z)$) are the eigenvalues of the matrix $B^{-1} A$. Also in this case the pencil (and the matrix $a(z)$) is not left prime.
Now suppose that the right kernel of $B$ is $d$-dimensional. If $B$ has full row rank, then two cases may occur:
- The compound matrix formed from $A$ and $B$ has full row rank. In this case the pencil (and the matrix $a(z)$) is left prime.
- The compound matrix does not have full row rank, and hence the pencil has a non-trivial (constant) left kernel for all $z \in \mathbb{C}$. In this case the pencil (and the matrix $a(z)$) is not left prime.
Finally suppose that $B$ does not have full row rank, and let $r$ denote the rank of the corresponding block of columns. Then there exists an orthogonal transformation, $U$ say, such that the first $r$ rows of these columns have full rank and the remaining rows are zero.
Using the same symbols for the transformed pencil, we conclude that the pencil has full row rank if and only if the reduced pencil has full row rank.
Now we iterate this procedure with the reduced pencil until we end up with the following staircase form of the pencil: the diagonal blocks are of dimensions $m_i \times n_i$. The first diagonal blocks of $B$ have full row rank (and hence $m_i \leq n_i$) and the corresponding diagonal blocks of $A$ are zero. Therefore the pencil has full row rank if and only if the last diagonal block has full row rank. For this last block the following cases may occur:
- If the last block is “tall”, then it has a non-trivial left kernel for all $z \in \mathbb{C}$. In this case the pencil (and the matrix $a(z)$) is not left prime.
- If the last block is square and the corresponding block of $B$ is nonsingular, then the zeroes of the pencil (and the zeroes of $a(z)$) are the eigenvalues of the corresponding matrix. In this case the pencil (and the matrix $a(z)$) is not left prime.
- If the last block of $A$ is zero and the corresponding block of $B$ has full row rank, then the pencil has full row rank for all $z \in \mathbb{C}$. In this case the pencil (and the matrix $a(z)$) is left prime.
- If the last block of $A$ is zero and the corresponding block of $B$ does not have full row rank, then the pencil has a non-trivial left kernel for all $z \in \mathbb{C}$. In this case the pencil (and the matrix $a(z)$) is not left prime.
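Every reduction step above rests on a numerical rank decision for a block of the pencil. The implementation uses orthogonal transformations for numerical stability; as a language-neutral illustration of the rank decision itself, here is a minimal Python stand-in (our own sketch: the function name and the simple Gaussian-elimination rule are assumptions, not the package's method).

```python
def num_rank(M, tol=1e-9):
    """Numerical rank via Gaussian elimination with partial pivoting;
    a pivot below tol is treated as zero."""
    A = [row[:] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        pivot = max(range(r, rows), key=lambda i: abs(A[i][c]))
        if abs(A[pivot][c]) < tol:
            continue  # column numerically dependent on the previous ones
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(r + 1, rows):
            f = A[i][c] / A[r][c]
            for j in range(c, cols):
                A[i][j] -= f * A[r][j]
        r += 1
    return r

print(num_rank([[1.0, 2.0], [2.0, 4.0]]))   # 1 (second row is a multiple)
print(num_rank([[1.0, 0.0], [0.0, 1.0]]))   # 2
```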
The function is.coprime() uses this approach to test whether a polynomial matrix is left prime (or a pair of polynomial matrices is left coprime). By default the function just returns a boolean value. However, if the optional argument only.answer is set to FALSE, then a list is returned which contains the staircase form described above. The list contains the following slots:
- answer is TRUE if the polynomial is left prime (the pair is left coprime).
- A, B hold the two matrices $A$ and $B$ (in staircase form).
- m, n are two integer vectors which contain the sizes of the blocks.
- zeroes is a vector which contains the zeroes of the pencil (of the polynomial). In the (co-)prime case this vector is empty, and if the pencil is rank deficient for all $z$ then zeroes = NA_real_.
Examples
We present three examples pertaining to the three different outcomes:
- coprime polynomial matrices,
- non-coprime polynomial matrices with a finite number of common zeros,
- non-coprime polynomial matrices which are rank deficient for all $z \in \mathbb{C}$.
Coprime Polynomials
Two “generic” polynomials are coprime.
# Generate a random (m x m), and a random (m x n) polynomial matrix, with degree p=2
m = 2
n = 3
set.seed(1803)
a = test_polm(dim = c(m,m), degree = 2, digits = 1, random = TRUE)
b = test_polm(dim = c(m,n), degree = 2, digits = 1, random = TRUE)
Hermite Form
The Hermite form of $c(z) = (a(z), b(z))$ is
HNF = hnf(cbind(a,b), from_left = FALSE)
print(HNF$h, digits = 2, format = 'character')
#> ( 2 x 5 ) matrix polynomial with degree <= 0
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 0 0 0 0
#> [2,] 0 1 0 0 0
The square submatrix of the Hermite form is equal to the identity matrix, hence $a(z)$ and $b(z)$ are left coprime.
Pencil
The method using the pencil gives the same answer:
- There are no common zeros.
- The polynomial matrices are left coprime.
out = is.coprime(a, b, debug = FALSE, only.answer = FALSE)
cat("The polynomials are left coprime: ", out$answer,"\n",
"The zeros of the pencil are: ", out$zeroes, sep = '')
#> The polynomials are left coprime: TRUE
#> The zeros of the pencil are:
Not coprime: Finite Number of Zeros
Here we consider an example where $c(z) = (a(z), b(z))$ is rank deficient for some (but not all) $z \in \mathbb{C}$. We generate such polynomials by multiplying them with a common (matrix) factor. We show that (at least) one singular value of $c(z)$, evaluated at the zeros of the common factor, is zero.
a0 = a
b0 = b
# Generate random common factor with degree 1
r = test_polm(dim = c(m,m), degree = 1, digits = 1, random = TRUE)
# Generate polynomials with a common factor
a = r %r% a0
b = r %r% b0
c = cbind(a,b)
z_r = zeroes(r)
cat("The zeros of the common factor r(z) are: \n", z_r, "\n\n")
#> The zeros of the common factor r(z) are:
#> -0.2051932 5.377607
d = svd(unclass(zvalues(c, z = z_r[1]))[,,1])$d
cat("minimal singular value of c(z_0): ", d[m], sep="")
#> minimal singular value of c(z_0): 1.007532e-16
Hermite Form
The Hermite form of $c(z)$ is
HNF = hnf(cbind(a,b), from_left = FALSE)
print(HNF$h, digits = 2, format = 'character')
#> ( 2 x 5 ) matrix polynomial with degree <= 2
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] 1 0 0 0 0
#> [2,] 2.49 - 0.39z -1.1 - 5.17z + z^2 0 0 0
We calculate the zeros of $c(z)$ by computing the zeros of the diagonal elements of the square submatrix of the Hermite form. Note that for this example the (1,1) element is equal to $1$, so only the (2,2) element contributes.
The (common) zeros are therefore
zeroes(HNF$h[m,m])
#> [1] -0.2051932 5.3776070
Note that these zeroes are identical to the zeroes of the common factor $r(z)$.
Pencil
The outcome for the pencil is the same as above.
out = is.coprime(a, b, debug = FALSE, only.answer = FALSE)
cat("The polynomials are left coprime: ", out$answer,"\n",
"The zeros of the pencil are: ", out$zeroes, sep = '')
#> The polynomials are left coprime: FALSE
#> The zeros of the pencil are: 5.377607-0.2051932
Not coprime: Everywhere Rank Deficient
Finally let us consider the case where $c(z) = (a(z), b(z))$ is rank deficient for all $z \in \mathbb{C}$. We generate a pair of polynomial matrices of this kind by multiplying the two polynomials (from the left) with a singular common factor.
To verify that $c(z)$ has reduced row rank for all $z$, we print the singular values of $c(z)$ at a randomly selected point.
# generate a square polynomial matrix with rank one!
r = test_polm(dim = c(m,1), degree = 1, digits = 1, random = TRUE) %r%
test_polm(dim = c(1,m), degree = 1, digits = 1, random = TRUE)
print(r, format = 'c')
#> ( 2 x 2 ) matrix polynomial with degree <= 2
#> [,1] [,2]
#> [1,] 1.36 - 1.43z - 2.1z^2 -0.72 + 2.07z - 1.35z^2
#> [2,] 1.53 + 3.47z + 1.82z^2 -0.81 - 0.36z + 1.17z^2
a = r %r% a0
b = r %r% b0
# Evaluate c(z) = (a(z),b(z)) at a random point z0 and print the corresponding singular values
z0 = complex(real = rnorm(1), imaginary = rnorm(1))
d = svd( unclass(zvalues(cbind(a,b), z = z0))[,,1], nu = 0, nv = 0 )$d
cat("The singular values of c(z) \n evaluated at z0 = ", z0, "\n are: ", d, "\n\n")
#> The singular values of c(z)
#> evaluated at z0 = 0.3368112-0.2077692i
#> are: 8.087889 6.256749e-16Hermite Form
The Hermite Form of is
HNF = hnf(cbind(a,b), from_left = FALSE)
print(HNF$h, digits = 1, format = 'char')
#> ( 2 x 5 ) matrix polynomial with degree <= 1
#> [,1] [,2] [,3] [,4] [,5]
#> [1,] -0.5 + z 0 0 0 0
#> [2,] -0.6 - 0.9z 0 0 0 0
The matrix $c(z)$ is rank deficient (for all $z \in \mathbb{C}$) since the second diagonal element of the Hermite form is zero!
The procedure hnf also returns an estimate of the rank of $c(z)$ (when $c(z)$ is considered as a rational matrix). Here we get
cat(HNF$rank,'\n')
#> 1
which again means that $c(z)$ is rank deficient for all $z \in \mathbb{C}$.
Pencil
The result for pencils is the same.
out = is.coprime(a, b, debug = FALSE, only.answer = FALSE)
cat("The polynomials are left coprime: ", out$answer,"\n",
"The zeros of the pencil are: ", out$zeroes, sep = '')
#> The polynomials are left coprime: FALSE
#> The zeros of the pencil are: NA
Column Reduced Matrix
Functional Area: Polynomial Manipulation (#4)
See R/polm_methods.R for implementation.
Note: The functions degree (col_end_matrix) and col_reduce use different strategies to compute the column degrees (column end matrix).
- The first two consider the individual elements of the respective columns, whereas
- col_reduce considers the Euclidean norm of the respective columns.
Let $a(z) = a_0 + a_1 z + \cdots + a_p z^p$ be a polynomial matrix and fix a tolerance. The function degree sets the degree of the $j$-th column to $d$ iff at least one element of the $j$-th column of $a_d$ is (numerically) nonzero and the $j$-th columns of $a_k$ are (numerically) zero for all $k > d$. The column end matrix is constructed correspondingly. The function col_reduce sets the degree of the $j$-th column to $d$ iff the Euclidean norm of the $j$-th column of $a_d$ is above the tolerance and the norms of the $j$-th columns of $a_k$ are below the tolerance for all $k > d$. Therefore one may get different results!
The basic strategy to construct a column reduced matrix (for the case that $a(z)$ is square and nonsingular) is as follows. First permute the columns of $a(z)$ such that the column degrees are ordered. Now suppose that the $j$-th column of the column end matrix is linearly dependent on the previous columns, i.e. it equals a linear combination of the columns $i < j$ with coefficients $\gamma_i$. Then we subtract from the $j$-th column of $a(z)$, for each $i < j$, $\gamma_i z^{d_j - d_i}$ times the $i$-th column, where $d_i$ denotes the $i$-th column degree. This operation reduces the degree of the $j$-th column by (at least) one.
This procedure is repeated until the column end matrix is regular.
See also Wolovich (1974).
The coefficients are determined from an SVD of the column end matrix.
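To make one reduction step concrete, consider the 2×2 example a(z) = [[1+z, z], [z, z]], whose column end matrix [[1, 1], [1, 1]] is singular. The Python sketch below (our own illustration with ad-hoc helpers, not the package's col_reduce) subtracts column 2 from column 1 and thereby lowers the first column degree from 1 to 0.

```python
def poly_deg(p, tol=1e-12):
    """Degree of a coefficient list [c0, c1, ...]; -1 for the zero polynomial."""
    for i in range(len(p) - 1, -1, -1):
        if abs(p[i]) >= tol:
            return i
    return -1

def poly_sub_shifted(p, q, shift):
    """p(z) - z^shift * q(z) on coefficient lists."""
    n = max(len(p), len(q) + shift)
    r = [0.0] * n
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i + shift] -= c
    return r

# columns of a(z) = [[1+z, z], [z, z]], stored as lists of coefficient lists
col1 = [[1.0, 1.0], [0.0, 1.0]]   # (1+z, z)'
col2 = [[0.0, 1.0], [0.0, 1.0]]   # (z, z)'
d1 = max(poly_deg(p) for p in col1)   # column degree 1
d2 = max(poly_deg(p) for p in col2)   # column degree 1

# the column end matrix [[1,1],[1,1]] is singular: end(col1) = 1 * end(col2),
# so subtract z^(d1-d2) * col2 (here gamma = 1, shift = 0) from col1
new_col1 = [poly_sub_shifted(p, q, d1 - d2) for p, q in zip(col1, col2)]
print(new_col1)   # [[1.0, 0.0], [0.0, 0.0]]: new column degree is 0
```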
Realization Algorithms
Functional Area: Realization Algorithms (#2)
See R/as_methods.R and related functions like
pseries2stsp(), pseries2lmfd() for
implementation.
Impulse Response
One of the main features of rational matrices is that the Hankel matrix of the impulse response coefficients has finite rank.
Note that the impulse response is only well defined if the rational matrix has no pole at $z = 0$.
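A minimal scalar illustration of the finite-rank property (our own Python sketch): for k(z) = 1/(1 − a z) the impulse response is k_j = a^j, and every row of the Hankel matrix built from k_1, k_2, … is a multiple of the first row, so the Hankel matrix has rank one.

```python
a = 0.5
k = [a**j for j in range(9)]                 # impulse response k_j = a^j
# 4x4 Hankel matrix built from k_1, k_2, ...
H = [[k[i + j + 1] for j in range(4)] for i in range(4)]
# every row i equals a^i times row 0, hence the rank is one
rank_one = all(abs(H[i][j] - a**i * H[0][j]) < 1e-12
               for i in range(4) for j in range(4))
print(rank_one)   # True
```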
LMFD
We consider an $(m \times n)$-dimensional rational matrix in LMFD form $k(z) = a^{-1}(z) b(z)$, where $a(z)$ and $b(z)$ are polynomial matrices and $a(z)$ is square and nonsingular. W.l.o.g. we set $a(0) = I_m$. This implies that $a(z) k(z) = b(z)$ holds, and thus the impulse response coefficients satisfy a linear recursion driven by the coefficients of $a(z)$. By this identity it follows that the Hankel matrix of the impulse response has finite rank and that the coefficients of the polynomial $a(z)$ are closely related to the left kernel of the Hankel matrix. In the following we discuss how to construct a unique LMFD of the rational matrix via this identity.
In order to describe the linear dependence structure of the rows of the Hankel matrix $H$, it is convenient to use a “double” index for the rows: let $(i, j)$ refer to the $i$-th row in the $j$-th block row of $H$.
A selection of rows of the Hankel matrix is called a nice selection if there are no “holes”, in the sense that whenever the row $(i, j)$ with $j > 1$ is selected, the row $(i, j-1)$ is selected as well. Nice selections may be described by a multi-index $(\nu_1, \ldots, \nu_m)$, where $\nu_i$ counts the selected rows in the $i$-th channel.
Suppose that $H$ has rank $s$. In general there are many different selections of $s$ rows of $H$ which form a basis for the row space of $H$. In the following we choose the first $s$ rows of $H$ which form a basis of the row space. Due to the Hankel structure of $H$ this is a nice selection in the above sense. The corresponding $\nu_i$'s are called Kronecker indices of the Hankel matrix (respectively of the rational matrix $k(z)$). Note that the sum of the Kronecker indices is equal to the rank of $H$: $\nu_1 + \cdots + \nu_m = s$.
The row $(i, \nu_i + 1)$ is linearly dependent on the previous (basis) rows and hence may be written as a linear combination of these rows with suitably chosen coefficients. The sum runs over all basis rows previous to the row $(i, \nu_i + 1)$, where a row $(k, j)$ counts as previous if $j \leq \nu_i$, or if $j = \nu_i + 1$ and $k < i$. This equation now (uniquely) defines a polynomial matrix $a(z)$: the entries corresponding to the previous basis rows are read off from the above equation, the entry corresponding to the row $(i, \nu_i + 1)$ itself is set to one, and all other entries are set to zero.
By construction $a(z)$ is a polynomial matrix (with degree less than or equal to $\max_i \nu_i$). In the last step we set $b(z) = a(z) k(z)$, which is a polynomial matrix by the above construction.
By the above construction one gets a unique LMFD representation $k(z) = a^{-1}(z) b(z)$ of the rational matrix with the following properties:
- $a(0)$ is a lower triangular matrix with ones on the diagonal.
- The degree of $b(z)$ is bounded by the degree of $a(z)$; in particular, for a square rational matrix with $k(0) = I_m$ we have $b(0) = a(0)$.
- The row degrees of $a(z)$ are equal to the Kronecker indices $\nu_1, \ldots, \nu_m$.
- The elements of $a(z)$ and $b(z)$ satisfy the degree restrictions of the echelon form.
- The pair $(a(z), b(z))$ is left coprime and row reduced.
A pair $(a(z), b(z))$ which satisfies the above conditions is said to be in echelon canonical form.
Ho-Kalman
A quite analogous strategy may be used to construct a (unique) state space realization of a rational matrix. Suppose that the Hankel matrix $H$ has rank $s$ and that the matrix $S$ is such that $SH$ is a basis for the row space of $H$. Then a state space representation of the rational matrix is obtained by solving the following equations.
The obtained result clearly depends on the matrix $S$, i.e. on the choice of the basis of the row space of $H$.
Two choices are implemented:
- echelon form: here $S$ is the selection matrix which selects the first $s$ linearly independent rows of $H$. The statespace realization obtained then is in echelon canonical form.
- balanced form: here $S$ is obtained via an SVD decomposition of a finite-dimensional submatrix of $H$. For more details, see …
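The mechanics are easiest to see in the scalar, rank-one case. The sketch below (our own Python illustration with hypothetical variable names, not the package's pseries2stsp) recovers a first-order realization from the impulse response k_0 = d, k_j = c a^(j−1) b: the first Hankel row spans the row space, and the shift between k_1 and k_2 determines the A "matrix".

```python
a_true, b_true, c_true, d_true = 0.8, 2.0, 0.5, 1.0
k = [d_true] + [c_true * a_true**(j - 1) * b_true for j in range(1, 7)]

# the Hankel matrix of k_1, k_2, ... has rank one; take its first row as
# basis of the row space, then the shift structure gives the realization
a_hat = k[2] / k[1]   # shifted basis row = a * basis row
c_hat = 1.0           # free normalization of the basis
b_hat = k[1] / c_hat  # k_1 = c b
d_hat = k[0]

# the reconstructed impulse response matches the original one
ok = all(abs(c_hat * a_hat**(j - 1) * b_hat - k[j]) < 1e-12 for j in range(1, 7))
print(ok)   # True
```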
Operations with State-Space Realizations
Functional Area: State-Space Tools (#6)
See R/arithmetic_methods.R and
R/stsp_methods.R for implementation.
Addition
$$ \left[\begin{array}{@{}cc|c@{}} A_1 & 0 & B_1 \\ 0 & A_2 & B_2 \\ \hline C_1 & C_2 & D_1 + D_2 \end{array}\right] $$
See Ops.ratm() in particular a + b.
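As a sanity check on this formula, the following Python sketch (our own, using two scalar subsystems; impulse responses computed directly from the realization) verifies that the stacked realization reproduces the sum of the two impulse responses.

```python
def impulse(A, B, C, D, n):
    """Impulse response of a state space system: k_0 = D, k_j = C A^(j-1) B."""
    out, v = [D], B[:]
    for _ in range(n):
        out.append(sum(ci * vi for ci, vi in zip(C, v)))
        v = [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
    return out

a1, b1, c1, d1 = 0.5, 1.0, 2.0, 1.0
a2, b2, c2, d2 = -0.3, 3.0, 0.5, 2.0

# stacked realization [A1 0; 0 A2 | B1; B2], [C1 C2 | D1 + D2]
A = [[a1, 0.0], [0.0, a2]]
B = [b1, b2]
C = [c1, c2]
D = d1 + d2

k_sum = impulse(A, B, C, D, 5)
ok = all(
    abs(k_sum[j] - ((d1 + d2) if j == 0 else
                    c1 * a1**(j - 1) * b1 + c2 * a2**(j - 1) * b2)) < 1e-12
    for j in range(6)
)
print(ok)   # True
```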
Multiplication
$$ \left[\begin{array}{@{}cc|c@{}} A_1 & B_1 C_2 & B_1 D_2 \\ 0 & A_2 & B_2 \\ \hline C_1 & D_1 C_2 & D_1 D_2 \end{array}\right] $$
See a %r% b.
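The same kind of check works for the product formula: the impulse response of k_1(z) k_2(z) is the convolution of the two impulse responses. A Python sketch with two scalar factors (our own illustration):

```python
def impulse_ss(A, B, C, D, n):
    """Impulse response: k_0 = D, k_j = C A^(j-1) B."""
    out, v = [D], B[:]
    for _ in range(n):
        out.append(sum(ci * vi for ci, vi in zip(C, v)))
        v = [sum(aij * vj for aij, vj in zip(row, v)) for row in A]
    return out

a1, b1, c1, d1 = 0.5, 1.0, 2.0, 1.0
a2, b2, c2, d2 = -0.3, 3.0, 0.5, 2.0

# product realization [A1 B1C2; 0 A2 | B1D2; B2], [C1 D1C2 | D1D2]
A = [[a1, b1 * c2], [0.0, a2]]
B = [b1 * d2, b2]
C = [c1, d1 * c2]
D = d1 * d2

k1 = [d1] + [c1 * a1**(j - 1) * b1 for j in range(1, 7)]
k2 = [d2] + [c2 * a2**(j - 1) * b2 for j in range(1, 7)]
k12 = impulse_ss(A, B, C, D, 6)

# impulse response of the product equals the convolution of k1 and k2
ok = all(abs(k12[j] - sum(k1[i] * k2[j - i] for i in range(j + 1))) < 1e-9
         for j in range(7))
print(ok)   # True
```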
Inverse
$$ \left[\begin{array}{@{}c|c@{}} A - B D^{-1} C & BD^{-1} \\ \hline -D^{-1}C & D^{-1} \end{array}\right] $$
See Ops.ratm() in particular x^{-1}.
Bind by columns or rows
$$
\left[\begin{array}{@{}cc|cc@{}}
A_1 & 0 & B_1 & 0 \\
0 & A_2 & 0 & B_2 \\ \hline
C_1 & C_2 & D_1 & D_2
\end{array}\right]
$$ See cbind.ratm().
$$
\left[\begin{array}{@{}cc|c@{}}
A_1 & 0 & B_1 \\
0 & A_2 & B_2 \\ \hline
C_1 & 0 & D_1 \\
0 & C_2 & D_2
\end{array}\right]
$$ See rbind.ratm().
Elementwise Multiplication
First consider the elementwise product of two $(m \times 1)$-dimensional rational matrices, and write the elementwise product as the product of a suitable diagonal matrix with the second factor:
$$ \left[\begin{array}{@{}cccc|c@{}} A_1 & \cdots & 0 & B_1 C_{2,1} & B_1 D_{2,1} \\ \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & \cdots & A_1 & B_1 C_{2,m} & B_1 D_{2,m} \\ 0 & \cdots & 0 & A_2 & B_2 \\ \hline C_{1,1} & \cdots & 0 & D_{1,1} C_{2,1} & D_{1,1} D_{2,1} \\ \vdots & \ddots & \vdots & \vdots & \vdots \\ 0 & \cdots & C_{1,m} & D_{1,m} C_{2,m} & D_{1,m} D_{2,m} \end{array}\right] $$ where $C_{i,j}$ denotes the $j$-th row of $C_i$ and $D_{i,j}$ is the $j$-th entry of the $(m \times 1)$-dimensional matrix $D_i$.
It can be shown (???) that the controllability matrix of the above statespace model has rank at most $n_1 + n_2$, where $n_i$ denotes the state dimension of the $i$-th factor. Therefore we may construct an equivalent model with state dimension $n_1 + n_2$.
Now we simply do this construction for all columns of the two factors and then “column-bind” the results. The statespace realisation obtained then has state dimension $n(n_1 + n_2)$.
For “wide” matrices ($m < n$) we consider the transposes of the two factors. This gives statespace realisations with $m(n_1 + n_2)$ states.
- proof ???
- can the statespace representation be written down directly?
See Ops.ratm() in particular a * b.
Transpose
$$ \left[\begin{array}{@{}c|c@{}} A' & C' \\ \hline B' & D' \end{array}\right] $$
See t().
Hermitean Transpose
$$ \left[\begin{array}{@{}c|c@{}} A^{-T} & (CA^{-1})' \\ \hline (-A^{-1}B)' & D' - B' A^{-T} C' \end{array}\right] $$
See Ht().
Derivative
The corresponding statespace realization of the derivative is
$$
\left[\begin{array}{@{}cc|c@{}}
A & I & B \\
0 & A & AB \\ \hline
CA & C & CB
\end{array}\right]
$$ See derivative.stsp().
Construct statespace representation of rational matrix in LMFD form
Suppose $k(z) = a^{-1}(z) b(z)$, with $a(z) = a_0 + a_1 z + \cdots + a_p z^p$, is given. The construction detailed below assumes that $a_0$ is nonsingular, and hence we rewrite the polynomial as $a(z) = I_m - a_1 z - \cdots - a_p z^p$ (i.e. we assume, w.l.o.g., that $a_0 = I_m$). The power series expansion of $k(z)$ is $k(z) = k_0 + k_1 z + k_2 z^2 + \cdots$. Then a statespace realization of $k(z)$ is $$
\left[\begin{array}{@{}cccc|c@{}}
a_1 & \cdots & a_{p-1} & a_p & k_p \\
I_m & \cdots & 0 & 0 & k_{p-1} \\
\vdots& \ddots & \vdots & \vdots & \vdots \\
0 & \cdots & I_m & 0 & k_1 \\ \hline
0 & \cdots & 0 & I_m & k_0
\end{array}\right]
$$ This scheme is implemented in as.stsp.lmfd (and
as.stsp.polm).
Construct statespace representation of rational matrix in RMFD form
Likewise, it is possible to write an RMFD $k(z) = d(z) c^{-1}(z)$, where $c(z)$ and $d(z)$ are polynomial matrices with degrees $p$ and $q$, respectively. Again, we simplify the exposition by setting the nonsingular constant term $c_0$ equal to the identity matrix. In the case $p = q$, the state space realization of $k(z)$ is
$$
\left[\begin{array}{@{}cccc|c@{}}
c_1 & \cdots & c_{p-1} & c_p & I_m \\
I_m & \cdots & 0 & 0 & 0 \\
\vdots& \ddots & \vdots & \vdots & \vdots \\
0 & \cdots & I_m & 0 & 0 \\ \hline
d_1 & \cdots & d_{q-1} & d_q & d_0
\end{array}\right]
$$ If $p > q$, some zeros appear in the $C$ matrix; if $p < q$, some zeros are padded into the $A$ matrix. This scheme is implemented in as.stsp.rmfd.
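Both companion-form schemes are ultimately anchored in the power series of the MFD. In the scalar case this series follows from k(z) c(z) = d(z) by a simple recursion (assuming c_0 = 1, as above); a Python sketch, ours rather than the package's:

```python
def mfd_power_series(c, d, n):
    """Power series coefficients k_0, ..., k_n of k(z) = d(z) / c(z),
    with c, d given as coefficient lists and c[0] == 1:
    k_j = d_j - sum_{i=1}^{min(j, p)} c_i k_{j-i}."""
    k = []
    for j in range(n + 1):
        kj = d[j] if j < len(d) else 0.0
        for i in range(1, min(j, len(c) - 1) + 1):
            kj -= c[i] * k[j - i]
        k.append(kj)
    return k

# k(z) = (1 + z) / (1 - 0.5 z): k_0 = 1 and k_j = 1.5 * 0.5^(j-1) for j >= 1
k = mfd_power_series([1.0, -0.5], [1.0, 1.0], 5)
print(k[:3])   # [1.0, 1.5, 0.75]
```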
Schur, QZ Decomposition, Lyapunov and Riccati Equations
Functional Area: Numerical Methods (#7)
See src/lyapunov.cpp,
inst/include/rationalmatrices_lyapunov.h, and
R/lyapunov.R for implementation.
Lyapunov Equation
The generalized (non-symmetric) Lyapunov equation involves two coefficient matrices $A$ and $B$ and a right-hand side $C$. With Schur factorizations of $A$ and $B$ we obtain an equivalent equation with triangular coefficient matrices, which is then solved block by block:
- 22-block: solve for
- 12-block: solve for
- 21-block: solve for
- continue with 11-block
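To illustrate the back-substitution idea in code, assume (our assumption, for the sake of the sketch) that the equation has the Stein form X − A X B′ = C with A, B upper triangular after the Schur step. Then the entries of X can be solved starting from the last diagonal position and working upward, mirroring the block order listed above:

```python
def solve_stein_triangular(A, B, C):
    """Solve X - A X B' = C entrywise, for upper triangular A and B."""
    n = len(A)
    X = [[0.0] * n for _ in range(n)]
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            s = C[i][j]
            # (A X B')_{ij} = sum_{k>=i, l>=j} A_ik X_kl B_jl; all entries
            # X_kl needed here, except X_ij itself, are already computed
            for k in range(i, n):
                for l in range(j, n):
                    if (k, l) != (i, j):
                        s += A[i][k] * X[k][l] * B[j][l]
            X[i][j] = s / (1.0 - A[i][i] * B[j][j])
    return X

# worked example: build C from a known solution and recover it
A = [[0.5, 0.2], [0.0, -0.4]]
B = [[0.3, -0.1], [0.0, 0.6]]
X_true = [[1.0, 2.0], [3.0, 4.0]]
AXBt = [[sum(A[i][k] * X_true[k][l] * B[j][l] for k in range(2) for l in range(2))
         for j in range(2)] for i in range(2)]
C = [[X_true[i][j] - AXBt[i][j] for j in range(2)] for i in range(2)]
X = solve_stein_triangular(A, B, C)
ok = all(abs(X[i][j] - X_true[i][j]) < 1e-9 for i in range(2) for j in range(2))
print(ok)   # True
```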
We could also consider the non-square case, where $A$ is $(m \times m)$, $B$ is $(n \times n)$ and $C$ (and the solution) is $(m \times n)$.
Balancing and Balanced Truncation
Consider a pair of Grammians $(P, Q)$. We first compute the square roots $P^{1/2}$ and $Q^{1/2}$ and then consider the SVD of their product:
where the diagonal blocks , are of size and respectively. We assume that is positive definite and set and note that We extend these matrices to square matrices
such that , i.e. . To this end we consider the SVD’s
and set and .
We now consider the transformed statespace realization $$ \left[\begin{array}{@{}c|c@{}} A & B \\ \hline C & D \end{array}\right] \; \longrightarrow \; \left[\begin{array}{@{}cc|c@{}} T_1 A S_1 & T_1 A S_2 & T_1 B \\ T_2 A S_1 & T_2 A S_2 & T_2 B \\ \hline C S_1 & C S_2 & D \end{array}\right] $$ and the Grammians are transformed as and .
Here we have used that and . Due to the block diagonal structure of and we also immediately get
and
The following scenarios are of particular interest:
- : The model is minimal and the above procedure renders the statespace realization into balanced form.
- and : The above procedure renders the model into a form where the controllable and observable states are clearly separated from the non-observable or non-controllable states. In particular the truncated model $$ \left[\begin{array}{@{}c|c@{}} T_1 A S_1 & T_1 B \\ \hline C S_1 & D \end{array}\right] $$ is equivalent to the original model, and it is minimal and in balanced form.
- If but then the truncated model is just an approximation of the original model. The quality depends on the size of the neglected singular values. Note also that the truncated model is not balanced.
Note that the determination of the number of nonzero singular values is quite tricky (difficult due to numerics). Currently the number of nonzero singular values is chosen as the number of singular values which are larger than a threshold defined by a user-specified tolerance parameter.
Reflect Poles and Zeroes
All-Pass Matrices
A rational matrix $k(z)$ is all-pass if $k(z) k^*(z) = I$. If $(A, B, C, D)$ is a statespace realization of $k(z)$, then the product $k(z) k^*(z)$ has a realization given by $$ \left[\begin{array}{@{}cc|c@{}} A & -BB'A^{-T} & BD' - BB'A^{-T}C' \\ 0 & A^{-T} & A^{-T}C' \\ \hline C & -DB'A^{-T} & DD' - DB' A^{-T} C' \end{array}\right] $$
A state transformation gives
$$ \left[ \begin{array}{@{}cc|c@{}} I & X & 0 \\ 0 & I & 0 \\ \hline 0 & 0 & I \end{array} \right] \left[\begin{array}{@{}cc|c@{}} A & -BB'A^{-T} & BD' - BB'A^{-T}C' \\ 0 & A^{-T} & A^{-T}C' \\ \hline C & -DB'A^{-T} & DD' - DB' A^{-T} C' \end{array}\right] \left[ \begin{array}{@{}cc|c@{}} I & -X & 0 \\ 0 & I & 0 \\ \hline 0 & 0 & I \end{array} \right] = \left[\begin{array}{@{}cc|c@{}} A & XA^{-T}-AX-BB'A^{-T} & BD' - BB'A^{-T}C' + X A^{-T}C'\\ 0 & A^{-T} & A^{-T}C' \\ \hline C & -DB'A^{-T}-CX & DD' - DB' A^{-T} C' \end{array}\right] $$
The (1,1) block is not controllable and the (2,2) block is not observable if $X$ solves the Lyapunov equation $X = A X A' + B B'$ (which makes the transformed (1,2) block of the $A$ matrix zero) and in addition $BD' + AXC' = 0$ and $DB' + CXA' = 0$ hold.
Furthermore $DD' - DB' A^{-T} C' = I$ must hold.
It follows that we end up with the system $$ \left[\begin{array}{@{}cc|c@{}} A & 0 & 0\\ 0 & A^{-T} & A^{-T}C' \\ \hline C & 0 & I \end{array}\right] $$ whose transfer function is equal to i.e. the identity matrix as required.
In the following we need to construct an allpass matrix with given $(A, B)$ (or given $(A, C)$).
- From the above Lyapunov equation determine $X$.
- Determine the row space of $C$ from the corresponding left kernel.
- Determine the scaling matrix $D$ from the requirement $DD' - DB' A^{-T} C' = I$.
Zero Cancellations
Let $k(z)$ be given. We want to find an allpass transfer function $\hat{k}(z)$ such that some of the zeroes of $k(z)$ are cancelled in the product $k(z)\hat{k}(z)$.
Consider the state transformation of the realization of :
$$ \left[ \begin{array}{@{}ccc|c@{}} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ I & 0 & I & 0 \\ \hline 0 & 0 & 0 & I \end{array} \right] \left[ \begin{array}{@{}ccc|c@{}} A_{11} & A_{12} & B_{1}\hat{C} & B_{1}\hat{D} \\ A_{21} & A_{22} & B_{2}\hat{C} & B_{2}\hat{D} \\ 0 & 0 & \hat{A} & \hat{B} \\ \hline C_{1} & C_{2} & D\hat{C} & D\hat{D} \end{array} \right] \left[ \begin{array}{@{}ccc|c@{}} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ -I & 0 & I & 0 \\ \hline 0 & 0 & 0 & I \end{array} \right] = \left[ \begin{array}{@{}ccc|c@{}} A_{11}-B_{1}\hat{C} & A_{12} & B_{1}\hat{C} & B_{1}\hat{D}\\ A_{21}-B_{2}\hat{C} & A_{22} & B_{2}\hat{C} & B_2 \hat{D} \\ A_{11}-B_{1}\hat{C} -\hat{A} & A_{12} & \hat{A} + B_{1}\hat{C} & B_{1}\hat{D} + \hat{B} \\ \hline C_{1}-D\hat{C} & C_{2} & D\hat{C} & D\hat{D} \end{array} \right] $$ The block is not observable if
Pole Cancellations
Let $k(z)$ be given. We want to find an allpass transfer function $\hat{k}(z)$ such that some of the poles of $k(z)$ are cancelled in the product $k(z)\hat{k}(z)$.
Consider the state transformation of the realization of :
$$ \left[ \begin{array}{@{}ccc|c@{}} I & 0 & -I & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ \hline 0 & 0 & 0 & I \end{array} \right] \left[ \begin{array}{@{}ccc|c@{}} A_{11} & A_{12} & B_{1}\hat{C} & B_{1}\hat{D} \\ A_{21} & A_{22} & B_{2}\hat{C} & B_{2}\hat{D} \\ 0 & 0 & \hat{A} & \hat{B} \\ \hline C_{1} & C_{2} & D\hat{C} & D\hat{D} \end{array} \right] \left[ \begin{array}{@{}ccc|c@{}} I & 0 & I & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & I & 0 \\ \hline 0 & 0 & 0 & I \end{array} \right] = \left[ \begin{array}{@{}ccc|c@{}} A_{11} & A_{12} & A_{11} + B_{1}\hat{C} - \hat{A} & B_{1}\hat{D} - \hat{B} \\ A_{21} & A_{22} & A_{21} + B_{2}\hat{C} & B_2 \hat{D} \\ 0 & 0 & \hat{A} & \hat{B} \\ \hline C_{1} & C_{2} & C_1 + D\hat{C} & D\hat{D} \end{array} \right] $$
The (1,1) block is not controllable if
Hence the “hat” matrices of the allpass factor are determined.
QR Decomposition, Rank and Null Space
Functional Area: Utilities & Helpers (#10)
Supporting theory for rank computation and null space analysis in polynomial operations.
Consider the following example
tol = 1e-7
tol2 = tol/10
x = diag(c(1,tol2, tol2^2, tol2^3))
qr_x = qr(x, tol = tol)
x
#> [,1] [,2] [,3] [,4]
#> [1,] 1 0e+00 0e+00 0e+00
#> [2,] 0 1e-08 0e+00 0e+00
#> [3,] 0 0e+00 1e-16 0e+00
#> [4,] 0 0e+00 0e+00 1e-24
qr_x$rank
#> [1] 4
qr_x$pivot
#> [1] 1 2 3 4
qr.R(qr_x)
#> [,1] [,2] [,3] [,4]
#> [1,] -1 0e+00 0e+00 0e+00
#> [2,] 0 -1e-08 0e+00 0e+00
#> [3,] 0 0e+00 -1e-16 0e+00
#> [4,] 0 0e+00 0e+00 1e-24
Is this the desired result for the rank (and the null space)? The
main reason for this somewhat strange result is that qr()
considers a column as zero, if and only if the projection onto the space
spanned by the previous columns reduces the norm by a factor which is
larger than tol.
The following example works as desired
x[1,] = 1
x[nrow(x), ncol(x)] = 1
qr_x = qr(x, tol = tol)
x
#> [,1] [,2] [,3] [,4]
#> [1,] 1 1e+00 1e+00 1
#> [2,] 0 1e-08 0e+00 0
#> [3,] 0 0e+00 1e-16 0
#> [4,] 0 0e+00 0e+00 1
qr_x$rank
#> [1] 2
qr_x$pivot
#> [1] 1 4 2 3
qr.R(qr_x)
#> [,1] [,2] [,3] [,4]
#> [1,] -1 -1 -1e+00 -1e+00
#> [2,] 0 -1 0e+00 0e+00
#> [3,] 0 0 -1e-08 0e+00
#> [4,] 0 0 0e+00 1e-16