synchronization with the speaker’s speech received at the receiver (pages 12 and 13; Figure 7).

In Ejiri, terminal 2 (Figure 3) contains a voice recognizer unit 4 that recognizes a voice received over line 1 (translation, page 6). The voice data output from voice recognizer unit 4 is sent to control unit 6, where a query is made to image storage device 12 for a previously stored image of a person that matches the recognized voice. If a match is found, a synthesized image of the matching person stored in image storage device 12 is combined with the received voice by control unit 6 to give viewers of display 11 the illusion that they actually see the person talking to them (translation, pages 6, 7, 9, and 10).

Appellant argues (Brief, page 11) that “Welsh deals with coding video signals corresponding to images at the transmitting side, at two different rates.” According to appellant (Brief, page 11), “[t]he slow moving portion of each image is coded at one frame rate and the faster moving portions of each image is [sic, are] coded at a faster frame rate.” Appellant concludes (Brief, page 12) that “[t]here is