Holmes teaches generating a continuous-distribution
probability-density HMM from a quantized vector
series for training and recognition: "A more widely
used method for coping with the fact that particular
sets of finely quantized feature values will occur
only very rarely is to represent the distribution of
feature vectors by some simple parametric model, and
to use the calculated probabilities from this model
to supply the probability distributions in the
training and recognition processes. The Baum-Welch
re-estimation must then be used to optimize the
parameters of the feature distribution model, rather
than the probabilities of the particular feature
vectors" (p. 143). Said computation of optimum
parameters of the feature distribution model (for
each state, tacitly understood) is just the recited
calculation of the incidence of the labels in each
state, from the HMM state likelihood functions
described by said parameters (claim 3), determined
from the training vectors.
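
To illustrate the computation at issue, the following
Python sketch shows the Baum-Welch re-estimation
(M-step) of a simple per-state Gaussian feature
distribution model, assuming the state occupancy
probabilities have already been obtained from the
forward-backward algorithm. The function and variable
names are illustrative only and appear neither in
Holmes nor in the claims.

import numpy as np

def reestimate_gaussians(observations, gamma):
    # observations: (T, D) training feature vectors
    # gamma: (T, N) state occupancy probabilities from
    #        forward-backward; gamma[t, i] = P(state i at time t | data)
    occupancy = gamma.sum(axis=0)            # (N,) expected time in each state
    # occupancy-weighted mean feature vector for each state
    means = (gamma.T @ observations) / occupancy[:, None]      # (N, D)
    # occupancy-weighted diagonal variances for each state
    diffs = observations[:, None, :] - means[None, :, :]       # (T, N, D)
    variances = (gamma[:, :, None] * diffs ** 2).sum(axis=0) / occupancy[:, None]
    return means, variances

The weighting by gamma is what makes this a
calculation of the incidence of the feature labels in
each state, rather than a simple average over all
training vectors.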
Holmes also teaches clustering and using
nearest-neighbor templates representing the average
properties in each cluster (p. 125), and vector
quantizing training (and test) patterns into a label
series of clusters to which they belong ("It is
possible to make a useful approximation to the
feature vectors that actually occur by choosing only
a small subset (perhaps about 100) of feature
vectors, and replacing each measured vector by the
one in the subset that is 'nearest' to it according
to a suitable distance metric. This process is
known as vector quantization", p. 142, emphasis in
original). As discussed above, since the
Specification does not teach a two-step
quantization, the examiner has interpreted the
recited "vectors so quantized" as a reference to the
inherent quantization involved in the measurement of
continuous data.
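
For illustration, the vector quantization Holmes
describes can be sketched in a few lines of Python;
the codebook below stands in for the "small subset
(perhaps about 100) of feature vectors," and the
names are illustrative rather than drawn from the
reference.

import numpy as np

def vector_quantize(vectors, codebook):
    # vectors: (T, D) measured feature vectors
    # codebook: (K, D) representative vectors, e.g. K of about 100
    # squared Euclidean distance from each vector to each codebook entry
    d2 = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
    # label series: index of the nearest codebook entry for each vector
    return d2.argmin(axis=1)

Each measured vector is thereby replaced by the index
of its nearest codebook entry, yielding the label
series of clusters discussed above.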