show that people prefer to use hand gestures in combination with speech in a virtual environment, since they allow the user to interact without special training or special apparatus” (page 121 of Pavlovic; the examiner’s emphasis, at page 4 of the answer).

Appellant responds by arguing that Inagaki’s system merely detects the presence of speech (specifically, the presence of speech from a speaking attendee) and then highlights the PIP of the speaking attendee. Inagaki does not, however, contends appellant, disclose or suggest changing a PIP display characteristic in response to a received audio command and a related gesture from a user. Since Inagaki does not detect any content of the speech of the speaking attendee, appellant contends that it cannot be said that Inagaki determines whether a command is being spoken. Accordingly, argues appellant, there would have been no motivation to combine Inagaki with the gestures taught by Pavlovic.

We have considered the evidence before us, including the arguments of appellant and the examiner, and we conclude therefrom that the examiner has established a prima facie case of obviousness which has not been overcome by appellant. Accordingly, we will sustain the rejection of claims 1-20 under 35 U.S.C. § 103.

Inagaki clearly teaches the movement of a camera to a different conference attendee, dependent on the attendee’s voice (see column 11, line 65, through column 12, line 25). Since a different attendee will appear larger on the display screen, clearly